Defines the penalty which will be applied to an
originator message's tq-field on every hop.
+What: /sys/class/net/<mesh_iface>/mesh/network_coding
+Date: Nov 2012
+Contact: Martin Hundeboll <martin@hundeboll.net>
+Description:
+ Controls whether Network Coding (combining overheard
+ forwarded packets so that fewer wifi transmissions
+ carry the same content) is enabled or not.
+
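For illustration, a minimal user-space sketch that enables the knob. The
mesh interface name "bat0" is an assumption, not part of the ABI:

/* Sketch: enable batman-adv network coding via sysfs.
 * The interface name "bat0" is assumed; substitute your mesh iface. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/class/net/bat0/mesh/network_coding", "w");

	if (!f) {
		perror("network_coding");
		return 1;
	}
	fputs("1\n", f);	/* "1" enables, "0" disables */
	return fclose(f) ? 1 : 0;
}
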
What: /sys/class/net/<mesh_iface>/mesh/orig_interval
Date: May 2010
Contact: Marek Lindner <lindner_marek@yahoo.de>
--- /dev/null
+* AT91 CAN *
+
+Required properties:
+ - compatible: Should be "atmel,at91sam9263-can" or "atmel,at91sam9x5-can"
+ - reg: Should contain CAN controller registers location and length
+ - interrupts: Should contain IRQ line for the CAN controller
+
+Example:
+
+ can0: can@f000c000 {
+ compatible = "atmel,at91sam9x5-can";
+ reg = <0xf000c000 0x300>;
+ interrupts = <40 4 5>;
+ };
- mac_control : Specifies Default MAC control register content
for the specific platform
- slaves : Specifies number of slaves
-- cpts_active_slave : Specifies the slave to use for time stamping
+- active_slave : Specifies the slave to use for time stamping,
+ ethtool and SIOCGMIIPHY
- cpts_clock_mult : Numerator to convert input clock ticks into nanoseconds
- cpts_clock_shift : Denominator to convert input clock ticks into nanoseconds
-- phy_id : Specifies slave phy id
-- mac-address : Specifies slave MAC address
Optional properties:
- ti,hwmods : Must be "cpgmac0"
- no_bd_ram : Must be 0 or 1
- dual_emac : Specifies Switch to act as Dual EMAC
+
+Slave Properties:
+Required properties:
+- phy_id : Specifies slave phy id
+- mac-address : Specifies slave MAC address
+
+Optional properties:
- dual_emac_res_vlan : Specifies VID to be used to segregate the ports
Note: "ti,hwmods" field is used to fetch the base address and irq
rx_descs = <64>;
mac_control = <0x20>;
slaves = <2>;
- cpts_active_slave = <0>;
+ active_slave = <0>;
cpts_clock_mult = <0x80000000>;
cpts_clock_shift = <29>;
cpsw_emac0: slave@0 {
rx_descs = <64>;
mac_control = <0x20>;
slaves = <2>;
- cpts_active_slave = <0>;
+ active_slave = <0>;
cpts_clock_mult = <0x80000000>;
cpts_clock_shift = <29>;
cpsw_emac0: slave@0 {
--- /dev/null
+Marvell Distributed Switch Architecture Device Tree Bindings
+------------------------------------------------------------
+
+Required properties:
+- compatible : Should be "marvell,dsa"
+- #address-cells : Must be 2, first cell is the address on the MDIO bus
+ and second cell is the address in the switch tree.
+ Second cell is used only when cascading/chaining.
+- #size-cells : Must be 0
+- dsa,ethernet : Should be a phandle to a valid Ethernet device node
+- dsa,mii-bus : Should be a phandle to a valid MDIO bus device node
+
+Optional properties:
+- interrupts : property with a value describing the switch
+ interrupt number (not supported by the driver)
+
+A DSA node can contain multiple switch chips, each represented as a child node
+of the parent DSA node. The maximum number of allowed child nodes is 4
+(DSA_MAX_SWITCHES).
+Each of these switch child nodes should have the following required properties:
+
+- reg : Describes the switch address on the MII bus
+- #address-cells : Must be 1
+- #size-cells : Must be 0
+
+A switch may have multiple "port" child nodes.
+
+Each port child node must have the following mandatory properties:
+- reg : Describes the port address in the switch
+- label : Describes the label associated with this port, special
+ labels are "cpu" to indicate a CPU port and "dsa" to
+ indicate an uplink/downlink port.
+
+Note that a port labelled "dsa" implies checking for the "link" phandle
+described below.
+
+Optional property:
+- link : Should be a phandle to another switch's DSA port.
+ This property is only used when switches are being
+ chained/cascaded together.
+
+Example:
+
+ dsa@0 {
+ compatible = "marvell,dsa";
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ interrupts = <10>;
+ dsa,ethernet = <&ethernet0>;
+ dsa,mii-bus = <&mii_bus0>;
+
+ switch@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <16 0>; /* MDIO address 16, switch 0 in tree */
+
+ port@0 {
+ reg = <0>;
+ label = "lan1";
+ };
+
+ port@1 {
+ reg = <1>;
+ label = "lan2";
+ };
+
+ port@5 {
+ reg = <5>;
+ label = "cpu";
+ };
+
+ switch0uplink: port@6 {
+ reg = <6>;
+ label = "dsa";
+ link = <&switch1uplink>;
+ };
+ };
+
+ switch@1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <17 1>; /* MDIO address 17, switch 1 in tree */
+
+ switch1uplink: port@0 {
+ reg = <0>;
+ label = "dsa";
+ link = <&switch0uplink>;
+ };
+ };
+ };
- compatible: "marvell,orion-mdio"
- reg: address and length of the SMI register
+Optional properties:
+- interrupts: interrupt line number for the SMI error/done interrupt
+
The child nodes of the MDIO driver are the individual PHY devices
connected to this MDIO bus. They must have a "reg" property giving the
PHY address on the MDIO bus.
neigh/default/gc_thresh1 - INTEGER
Minimum number of entries to keep. Garbage collector will not
purge entries if there are fewer than this number.
- Default: 256
+ Default: 128
neigh/default/gc_thresh3 - INTEGER
Maximum number of neighbor entries allowed. Increase this
is inherited.
[see setsockopt(listenfd, SOL_TCP, TCP_CONGESTION, "name" ...) ]
-tcp_cookie_size - INTEGER
- Default size of TCP Cookie Transactions (TCPCT) option, that may be
- overridden on a per socket basis by the TCPCT socket option.
- Values greater than the maximum (16) are interpreted as the maximum.
- Values greater than zero and less than the minimum (8) are interpreted
- as the minimum. Odd values are interpreted as the next even value.
- Default: 0 (off).
-
tcp_dsack - BOOLEAN
Allows TCP to send "duplicate" SACKs.
Enable Early Retransmit (ER), per RFC 5827. ER lowers the threshold
for triggering fast retransmit when the amount of outstanding data is
small and when no previously unsent data can be transmitted (such
- that limited transmit could be used).
+ that limited transmit could be used). Also controls the use of
+ Tail loss probe (TLP), which converts RTOs occurring due to tail
+ losses into fast recovery (draft-dukkipati-tcpm-tcp-loss-probe-01).
Possible values:
0 disables ER
1 enables ER
by a fourth of RTT. This mitigates the case where a connection
falsely recovers when the network has a small degree of
reordering (less than 3 packets).
- Default: 2
+ 3 enables delayed ER and TLP.
+ 4 enables TLP only.
+ Default: 3
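As a sketch of how the values map to behaviour, a small reader for this
sysctl (value semantics taken from the table above):

/* Sketch: report the current tcp_early_retrans mode; the value
 * semantics are taken from the table above. */
#include <stdio.h>

int main(void)
{
	static const char * const mode[] = {
		"disabled",
		"early retransmit (ER)",
		"delayed ER (fast retransmit delayed by RTT/4)",
		"delayed ER and tail loss probe (TLP)",
		"TLP only",
	};
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_early_retrans", "r");
	int v;

	if (!f) {
		perror("tcp_early_retrans");
		return 1;
	}
	if (fscanf(f, "%d", &v) != 1 || v < 0 || v > 4) {
		fclose(f);
		return 1;
	}
	printf("tcp_early_retrans=%d: %s\n", v, mode[v]);
	fclose(f);
	return 0;
}
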
tcp_ecn - INTEGER
Control use of Explicit Congestion Notification (ECN) by TCP.
Default: 60 seconds
tcp_frto - INTEGER
- Enables Forward RTO-Recovery (F-RTO) defined in RFC4138.
+ Enables Forward RTO-Recovery (F-RTO) defined in RFC5682.
F-RTO is an enhanced recovery algorithm for TCP retransmission
- timeouts. It is particularly beneficial in wireless environments
- where packet loss is typically due to random radio interference
- rather than intermediate router congestion. F-RTO is sender-side
- only modification. Therefore it does not require any support from
- the peer.
-
- If set to 1, basic version is enabled. 2 enables SACK enhanced
- F-RTO if flow uses SACK. The basic version can be used also when
- SACK is in use though scenario(s) with it exists where F-RTO
- interacts badly with the packet counting of the SACK enabled TCP
- flow.
-
-tcp_frto_response - INTEGER
- When F-RTO has detected that a TCP retransmission timeout was
- spurious (i.e, the timeout would have been avoided had TCP set a
- longer retransmission timeout), TCP has several options what to do
- next. Possible values are:
- 0 Rate halving based; a smooth and conservative response,
- results in halved cwnd and ssthresh after one RTT
- 1 Very conservative response; not recommended because even
- though being valid, it interacts poorly with the rest of
- Linux TCP, halves cwnd and ssthresh immediately
- 2 Aggressive response; undoes congestion control measures
- that are now known to be unnecessary (ignoring the
- possibility of a lost retransmission that would require
- TCP to be more cautious), cwnd and ssthresh are restored
- to the values prior timeout
- Default: 0 (rate halving based)
+ timeouts. It is particularly beneficial in networks where the
+ RTT fluctuates (e.g., wireless). F-RTO is a sender-side-only
+ modification. It does not require any support from the peer.
+
+ By default it's enabled with a non-zero value. 0 disables F-RTO.
tcp_keepalive_time - INTEGER
How often TCP sends out keepalive messages when keepalive is enabled.
that enable and disable the LPI mode when there is nothing to be
transmitted.
-7) TODO:
+7) Extended descriptors
+The extended descriptors give us information about the receive Ethernet payload
+when it is carrying PTP packets or TCP/UDP/ICMP over IP.
+These are not available on GMAC Synopsys chips older than the 3.50.
+At probe time the driver will decide if these can actually be used.
+This support is also mandatory for PTPv2 because the extra descriptors 6 and 7
+are used for saving the hardware timestamps.
+
+8) Precision Time Protocol (PTP)
+The driver supports the IEEE 1588-2002, Precision Time Protocol (PTP),
+which enables precise synchronization of clocks in measurement and
+control systems implemented with technologies such as network
+communication.
+
+In addition to the basic timestamp features mentioned in IEEE 1588-2002,
+newer GMAC cores also support the advanced timestamp features of
+IEEE 1588-2008, which can be enabled when configuring the kernel.
+
+9) SGMII/RGMII support
+Newer GMAC devices provide their own way to manage RGMII/SGMII.
+This information is available at run-time by looking at the
+HW capability register. This means that the stmmac can manage
+auto-negotiation and link status w/o using the PHYLIB stuff.
+In fact, the HW provides a subset of extended registers to
+restart the ANE, verify Full/Half duplex mode and Speed.
+Also thanks to these registers it is possible to look at the
+Auto-negotiated Link Partner Ability.
+
+10) TODO:
o XGMAC is not supported.
- o Add the PTP - precision time protocol
+ o Complete the TBI & RTBI support.
+ o extended VLAN support for 3.70a SYNP GMAC.
PTP HARDWARE CLOCK SUPPORT
M: Richard Cochran <richardcochran@gmail.com>
+L: netdev@vger.kernel.org
S: Maintained
W: http://linuxptp.sourceforge.net/
F: Documentation/ABI/testing/sysfs-ptp
F: drivers/net/ethernet/qlogic/qlcnic/
QLOGIC QLGE 10Gb ETHERNET DRIVER
+M: Shahed Shaikh <shahed.shaikh@qlogic.com>
M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
M: Ron Mercer <ron.mercer@qlogic.com>
M: linux-driver@qlogic.com
F: drivers/usb/gadget/webcam.c
USB WIRELESS RNDIS DRIVER (rndis_wlan)
-M: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
+M: Jussi Kivilinna <jussi.kivilinna@iki.fi>
L: linux-wireless@vger.kernel.org
S: Maintained
F: drivers/net/wireless/rndis_wlan.c
rx_descs = <64>;
mac_control = <0x20>;
slaves = <2>;
- cpts_active_slave = <0>;
+ active_slave = <0>;
cpts_clock_mult = <0x80000000>;
cpts_clock_shift = <29>;
reg = <0x4a100000 0x800
#endif
if (bpf_jit_enable > 1)
- print_hex_dump(KERN_INFO, "BPF JIT code: ",
- DUMP_PREFIX_ADDRESS, 16, 4, ctx.target,
- alloc_size, false);
+ /* there are 2 passes here */
+ bpf_jit_dump(fp->len, alloc_size, 2, ctx.target);
fp->bpf_func = (void *)ctx.target;
out:
struct mv643xx_eth_shared_platform_data *orion_ge_shared_data,
struct resource *orion_ge_resource, unsigned long irq,
struct platform_device *orion_ge_shared,
+ struct platform_device *orion_ge_mvmdio,
struct mv643xx_eth_platform_data *eth_data,
struct platform_device *orion_ge)
{
orion_ge->dev.platform_data = eth_data;
platform_device_register(orion_ge_shared);
+ if (orion_ge_mvmdio)
+ platform_device_register(orion_ge_mvmdio);
platform_device_register(orion_ge);
}
static struct resource orion_ge00_shared_resources[] = {
{
.name = "ge00 base",
- }, {
- .name = "ge00 err irq",
},
};
},
};
+static struct resource orion_ge_mvmdio_resources[] = {
+ {
+ .name = "ge00 mvmdio base",
+ }, {
+ .name = "ge00 mvmdio err irq",
+ },
+};
+
+static struct platform_device orion_ge_mvmdio = {
+ .name = "orion-mdio",
+ .id = -1,
+};
+
static struct resource orion_ge00_resources[] = {
{
.name = "ge00 irq",
unsigned int tx_csum_limit)
{
fill_resources(&orion_ge00_shared, orion_ge00_shared_resources,
- mapbase + 0x2000, SZ_16K - 1, irq_err);
+ mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
+ fill_resources(&orion_ge_mvmdio, orion_ge_mvmdio_resources,
+ mapbase + 0x2004, 0x84 - 1, irq_err);
orion_ge00_shared_data.tx_csum_limit = tx_csum_limit;
ge_complete(&orion_ge00_shared_data,
orion_ge00_resources, irq, &orion_ge00_shared,
+ &orion_ge_mvmdio,
eth_data, &orion_ge00);
}
/*****************************************************************************
* GE01
****************************************************************************/
-struct mv643xx_eth_shared_platform_data orion_ge01_shared_data = {
- .shared_smi = &orion_ge00_shared,
-};
+struct mv643xx_eth_shared_platform_data orion_ge01_shared_data;
static struct resource orion_ge01_shared_resources[] = {
{
.name = "ge01 base",
- }, {
- .name = "ge01 err irq",
- },
+ }
};
static struct platform_device orion_ge01_shared = {
unsigned int tx_csum_limit)
{
fill_resources(&orion_ge01_shared, orion_ge01_shared_resources,
- mapbase + 0x2000, SZ_16K - 1, irq_err);
+ mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
orion_ge01_shared_data.tx_csum_limit = tx_csum_limit;
ge_complete(&orion_ge01_shared_data,
orion_ge01_resources, irq, &orion_ge01_shared,
+ NULL,
eth_data, &orion_ge01);
}
/*****************************************************************************
* GE10
****************************************************************************/
-struct mv643xx_eth_shared_platform_data orion_ge10_shared_data = {
- .shared_smi = &orion_ge00_shared,
-};
+struct mv643xx_eth_shared_platform_data orion_ge10_shared_data;
static struct resource orion_ge10_shared_resources[] = {
{
.name = "ge10 base",
- }, {
- .name = "ge10 err irq",
- },
+ }
};
static struct platform_device orion_ge10_shared = {
unsigned long irq_err)
{
fill_resources(&orion_ge10_shared, orion_ge10_shared_resources,
- mapbase + 0x2000, SZ_16K - 1, irq_err);
+ mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
ge_complete(&orion_ge10_shared_data,
orion_ge10_resources, irq, &orion_ge10_shared,
+ NULL,
eth_data, &orion_ge10);
}
/*****************************************************************************
* GE11
****************************************************************************/
-struct mv643xx_eth_shared_platform_data orion_ge11_shared_data = {
- .shared_smi = &orion_ge00_shared,
-};
+struct mv643xx_eth_shared_platform_data orion_ge11_shared_data;
static struct resource orion_ge11_shared_resources[] = {
{
.name = "ge11 base",
- }, {
- .name = "ge11 err irq",
},
};
unsigned long irq_err)
{
fill_resources(&orion_ge11_shared, orion_ge11_shared_resources,
- mapbase + 0x2000, SZ_16K - 1, irq_err);
+ mapbase + 0x2000, SZ_16K - 1, NO_IRQ);
ge_complete(&orion_ge11_shared_data,
orion_ge11_resources, irq, &orion_ge11_shared,
+ NULL,
eth_data, &orion_ge11);
}
}
if (bpf_jit_enable > 1)
- pr_info("flen=%d proglen=%u pass=%d image=%p\n",
- flen, proglen, pass, image);
+ /* Note that we dump code_base rather than image, since the
+ * opcodes live at code_base.
+ */
+ bpf_jit_dump(flen, proglen, pass, code_base);
if (image) {
- if (bpf_jit_enable > 1)
- print_hex_dump(KERN_ERR, "JIT code: ",
- DUMP_PREFIX_ADDRESS,
- 16, 1, code_base,
- proglen, false);
-
bpf_flush_icache(code_base, code_base + (proglen/4));
/* Function descriptor nastiness: Address + TOC */
((u64 *)image)[0] = (u64)code_base;
.resource = mv643xx_eth_shared_resources,
};
+/*
+ * The orion mdio driver only covers shared + 0x4 up to shared + 0x84 - 1
+ */
+static struct resource mv643xx_eth_mvmdio_resources[] = {
+ [0] = {
+ .name = "ethernet mdio base",
+ .start = 0xf1000000 + MV643XX_ETH_SHARED_REGS + 0x4,
+ .end = 0xf1000000 + MV643XX_ETH_SHARED_REGS + 0x83,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+static struct platform_device mv643xx_eth_mvmdio_device = {
+ .name = "orion-mdio",
+ .id = -1,
+ .num_resources = ARRAY_SIZE(mv643xx_eth_mvmdio_resources),
+ .resource = mv643xx_eth_mvmdio_resources,
+};
+
static struct resource mv643xx_eth_port1_resources[] = {
[0] = {
.name = "eth port1 irq",
static struct platform_device *mv643xx_eth_pd_devs[] __initdata = {
&mv643xx_eth_shared_device,
+ &mv643xx_eth_mvmdio_device,
ð_port1_device,
};
struct device_node *np, int id)
{
struct platform_device *pdev;
- struct resource r[1];
+ struct resource r[2];
int err;
err = of_address_to_resource(np, 0, &r[0]);
if (err)
return ERR_PTR(err);
+ /* register an orion mdio bus driver */
+ r[1].start = r[0].start + 0x4;
+ r[1].end = r[0].start + 0x84 - 1;
+ r[1].flags = IORESOURCE_MEM;
+
+ if (id == 0) {
+ pdev = platform_device_register_simple("orion-mdio", -1, &r[1], 1);
+ if (IS_ERR(pdev))
+ return pdev;
+ }
+
pdev = platform_device_register_simple(MV643XX_ETH_SHARED_NAME, id,
- r, 1);
+ &r[0], 1);
+
return pdev;
}
}
if (bpf_jit_enable > 1)
- pr_err("flen=%d proglen=%u pass=%d image=%p\n",
- flen, proglen, pass, image);
+ bpf_jit_dump(flen, proglen, pass, image);
if (image) {
- if (bpf_jit_enable > 1)
- print_hex_dump(KERN_ERR, "JIT code: ", DUMP_PREFIX_ADDRESS,
- 16, 1, image, proglen, false);
bpf_flush_icache(image, image + proglen);
fp->bpf_func = (void *)image;
}
}
oldproglen = proglen;
}
+
if (bpf_jit_enable > 1)
- pr_err("flen=%d proglen=%u pass=%d image=%p\n",
- flen, proglen, pass, image);
+ bpf_jit_dump(flen, proglen, pass, image);
if (image) {
- if (bpf_jit_enable > 1)
- print_hex_dump(KERN_ERR, "JIT code: ", DUMP_PREFIX_ADDRESS,
- 16, 1, image, proglen, false);
-
bpf_flush_icache(image, image + proglen);
-
fp->bpf_func = (void *)image;
}
out:
if (i)
bcma_err(core->bus, "PLL enable timeout\n");
} else {
- bcma_warn(core->bus, "Disabling PLL not supported yet!\n");
+ /*
+ * Mask the PLL but don't wait for it to be disabled. PLL may be
+ * shared between cores and will be still up if there is another
+ * core using it.
+ */
+ bcma_mask32(core, BCMA_CLKCTLST, ~req);
+ bcma_read32(core, BCMA_CLKCTLST);
}
}
EXPORT_SYMBOL_GPL(bcma_core_pll_ctl);
continue;
}
+ /* Only first GMAC core on BCM4706 is connected and working */
+ if (core->id.id == BCMA_CORE_4706_MAC_GBIT &&
+ core->core_unit > 0)
+ continue;
+
core->dev.release = bcma_release_core_dev;
core->dev.bus = &bcma_bus_type;
dev_set_name(&core->dev, "bcma%d:%d", bus->num, dev_id);
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
+void proc_coredump_connector(struct task_struct *task)
+{
+ struct cn_msg *msg;
+ struct proc_event *ev;
+ __u8 buffer[CN_PROC_MSG_SIZE];
+ struct timespec ts;
+
+ if (atomic_read(&proc_event_num_listeners) < 1)
+ return;
+
+ msg = (struct cn_msg *)buffer;
+ ev = (struct proc_event *)msg->data;
+ get_seq(&msg->seq, &ev->cpu);
+ ktime_get_ts(&ts); /* get high res monotonic timestamp */
+ put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
+ ev->what = PROC_EVENT_COREDUMP;
+ ev->event_data.coredump.process_pid = task->pid;
+ ev->event_data.coredump.process_tgid = task->tgid;
+
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+ msg->len = sizeof(*ev);
+ cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
+}
+
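A user-space sketch (not part of the patch) of how the new
PROC_EVENT_COREDUMP event could be consumed: subscribe to the proc
connector over netlink and print coredump notifications. Error handling
and buffer alignment are simplified, and it assumes the uapi headers
export the new event type.

/* Sketch (simplified error handling): subscribe to the proc connector
 * and print PROC_EVENT_COREDUMP notifications emitted by
 * proc_coredump_connector() above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>
#include <linux/cn_proc.h>

int main(void)
{
	struct sockaddr_nl sa = {
		.nl_family = AF_NETLINK,
		.nl_groups = CN_IDX_PROC,
		.nl_pid = getpid(),
	};
	char buf[1024];
	struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
	struct cn_msg *msg = NLMSG_DATA(nlh);
	enum proc_cn_mcast_op *op = (enum proc_cn_mcast_op *)msg->data;
	int sk = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);

	if (sk < 0 || bind(sk, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return 1;

	/* Send a PROC_CN_MCAST_LISTEN request to start receiving events. */
	memset(buf, 0, sizeof(buf));
	msg->id.idx = CN_IDX_PROC;
	msg->id.val = CN_VAL_PROC;
	msg->len = sizeof(*op);
	*op = PROC_CN_MCAST_LISTEN;
	nlh->nlmsg_len = NLMSG_LENGTH(sizeof(*msg) + sizeof(*op));
	nlh->nlmsg_type = NLMSG_DONE;
	nlh->nlmsg_pid = getpid();
	if (send(sk, nlh, nlh->nlmsg_len, 0) < 0)
		return 1;

	for (;;) {
		struct proc_event *ev;

		if (recv(sk, buf, sizeof(buf), 0) <= 0)
			break;
		ev = (struct proc_event *)
			((struct cn_msg *)NLMSG_DATA(nlh))->data;
		if (ev->what == PROC_EVENT_COREDUMP)
			printf("coredump: pid=%d tgid=%d\n",
			       ev->event_data.coredump.process_pid,
			       ev->event_data.coredump.process_tgid);
	}
	return 0;
}
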
void proc_exit_connector(struct task_struct *task)
{
struct cn_msg *msg;
}
if (!dca2_tag_map_valid(ioatdca->tag_map)) {
- dev_err(&pdev->dev, "APICID_TAG_MAP set incorrectly by BIOS, "
- "disabling DCA\n");
+ WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND,
+ "%s %s: APICID_TAG_MAP set incorrectly by BIOS, disabling DCA\n",
+ dev_driver_string(&pdev->dev),
+ dev_name(&pdev->dev));
free_dca_provider(dca);
return NULL;
}
}
if (dca3_tag_map_invalid(ioatdca->tag_map)) {
- dev_err(&pdev->dev, "APICID_TAG_MAP set incorrectly by BIOS, disabling DCA\n");
+ WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND,
+ "%s %s: APICID_TAG_MAP set incorrectly by BIOS, disabling DCA\n",
+ dev_driver_string(&pdev->dev),
+ dev_name(&pdev->dev));
free_dca_provider(dca);
return NULL;
}
tristate "IP networking over 1394"
depends on FIREWIRE && INET
help
- This enables IPv4 over IEEE 1394, providing IP connectivity with
- other implementations of RFC 2734 as found on several operating
- systems. Multicast support is currently limited.
+ This enables IPv4/IPv6 over IEEE 1394, providing IP connectivity
+ with other implementations of RFC 2734/3146 as found on several
+ operating systems. Multicast support is currently limited.
To compile this driver as a module, say M here: The module will be
called firewire-net.
/*
* IPv4 over IEEE 1394, per RFC 2734
+ * IPv6 over IEEE 1394, per RFC 3146
*
* Copyright (C) 2009 Jay Fenlason <fenlason@redhat.com>
*
#include <asm/unaligned.h>
#include <net/arp.h>
+#include <net/firewire.h>
/* rx limits */
#define FWNET_MAX_FRAGMENTS 30 /* arbitrary, > TX queue depth */
#define IANA_SPECIFIER_ID 0x00005eU
#define RFC2734_SW_VERSION 0x000001U
+#define RFC3146_SW_VERSION 0x000002U
#define IEEE1394_GASP_HDR_SIZE 8
#define RFC2374_HDR_LASTFRAG 2 /* last fragment */
#define RFC2374_HDR_INTFRAG 3 /* interior fragment */
-#define RFC2734_HW_ADDR_LEN 16
-
-struct rfc2734_arp {
- __be16 hw_type; /* 0x0018 */
- __be16 proto_type; /* 0x0806 */
- u8 hw_addr_len; /* 16 */
- u8 ip_addr_len; /* 4 */
- __be16 opcode; /* ARP Opcode */
- /* Above is exactly the same format as struct arphdr */
-
- __be64 s_uniq_id; /* Sender's 64bit EUI */
- u8 max_rec; /* Sender's max packet size */
- u8 sspd; /* Sender's max speed */
- __be16 fifo_hi; /* hi 16bits of sender's FIFO addr */
- __be32 fifo_lo; /* lo 32bits of sender's FIFO addr */
- __be32 sip; /* Sender's IP Address */
- __be32 tip; /* IP Address of requested hw addr */
-} __packed;
-
-/* This header format is specific to this driver implementation. */
-#define FWNET_ALEN 8
-#define FWNET_HLEN 10
-struct fwnet_header {
- u8 h_dest[FWNET_ALEN]; /* destination address */
- __be16 h_proto; /* packet type ID field */
-} __packed;
+static bool fwnet_hwaddr_is_multicast(u8 *ha)
+{
+ return !!(*ha & 1);
+}
/* IPv4 and IPv6 encapsulation header */
struct rfc2734_header {
struct list_head peer_link;
struct fwnet_device *dev;
u64 guid;
- u64 fifo;
- __be32 ip;
/* guarded by dev->lock */
struct list_head pd_list; /* received partial datagrams */
};
/*
+ * Get fifo address embedded in hwaddr
+ */
+static __u64 fwnet_hwaddr_fifo(union fwnet_hwaddr *ha)
+{
+ return (u64)get_unaligned_be16(&ha->uc.fifo_hi) << 32
+ | get_unaligned_be32(&ha->uc.fifo_lo);
+}
+
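The fifo accessor above assumes an RFC 2734-style unicast hardware
address layout, shared with user space via <net/firewire.h>. Roughly
(a sketch with the field names as used in this file, not the verbatim
header):

/* Sketch of the layout assumed by fwnet_hwaddr_fifo() et al.;
 * the authoritative definition lives in <net/firewire.h>. */
union fwnet_hwaddr {
	u8 u[16];
	struct {
		__be64 uniq_id;		/* EUI-64 */
		u8 max_rec;		/* max packet size */
		u8 sspd;		/* max speed */
		__be16 fifo_hi;		/* hi 16 bits of FIFO addr */
		__be32 fifo_lo;		/* lo 32 bits of FIFO addr */
	} __packed uc;
};
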
+/*
* saddr == NULL means use device source address.
* daddr == NULL means leave destination address (eg unresolved arp).
*/
bool is_broadcast, u16 ether_type)
{
struct fwnet_device *dev;
- static const __be64 broadcast_hw = cpu_to_be64(~0ULL);
int status;
__be64 guid;
+ switch (ether_type) {
+ case ETH_P_ARP:
+ case ETH_P_IP:
+#if IS_ENABLED(CONFIG_IPV6)
+ case ETH_P_IPV6:
+#endif
+ break;
+ default:
+ goto err;
+ }
+
dev = netdev_priv(net);
/* Write metadata, and then pass to the receive level */
skb->dev = net;
/*
* Parse the encapsulation header. This actually does the job of
- * converting to an ethernet frame header, as well as arp
- * conversion if needed. ARP conversion is easier in this
- * direction, since we are using ethernet as our backend.
+ * converting to an ethernet-like pseudo frame header.
*/
- /*
- * If this is an ARP packet, convert it. First, we want to make
- * use of some of the fields, since they tell us a little bit
- * about the sending machine.
- */
- if (ether_type == ETH_P_ARP) {
- struct rfc2734_arp *arp1394;
- struct arphdr *arp;
- unsigned char *arp_ptr;
- u64 fifo_addr;
- u64 peer_guid;
- unsigned sspd;
- u16 max_payload;
- struct fwnet_peer *peer;
- unsigned long flags;
-
- arp1394 = (struct rfc2734_arp *)skb->data;
- arp = (struct arphdr *)skb->data;
- arp_ptr = (unsigned char *)(arp + 1);
- peer_guid = get_unaligned_be64(&arp1394->s_uniq_id);
- fifo_addr = (u64)get_unaligned_be16(&arp1394->fifo_hi) << 32
- | get_unaligned_be32(&arp1394->fifo_lo);
-
- sspd = arp1394->sspd;
- /* Sanity check. OS X 10.3 PPC reportedly sends 131. */
- if (sspd > SCODE_3200) {
- dev_notice(&net->dev, "sspd %x out of range\n", sspd);
- sspd = SCODE_3200;
- }
- max_payload = fwnet_max_payload(arp1394->max_rec, sspd);
-
- spin_lock_irqsave(&dev->lock, flags);
- peer = fwnet_peer_find_by_guid(dev, peer_guid);
- if (peer) {
- peer->fifo = fifo_addr;
-
- if (peer->speed > sspd)
- peer->speed = sspd;
- if (peer->max_payload > max_payload)
- peer->max_payload = max_payload;
-
- peer->ip = arp1394->sip;
- }
- spin_unlock_irqrestore(&dev->lock, flags);
-
- if (!peer) {
- dev_notice(&net->dev,
- "no peer for ARP packet from %016llx\n",
- (unsigned long long)peer_guid);
- goto no_peer;
- }
-
- /*
- * Now that we're done with the 1394 specific stuff, we'll
- * need to alter some of the data. Believe it or not, all
- * that needs to be done is sender_IP_address needs to be
- * moved, the destination hardware address get stuffed
- * in and the hardware address length set to 8.
- *
- * IMPORTANT: The code below overwrites 1394 specific data
- * needed above so keep the munging of the data for the
- * higher level IP stack last.
- */
-
- arp->ar_hln = 8;
- /* skip over sender unique id */
- arp_ptr += arp->ar_hln;
- /* move sender IP addr */
- put_unaligned(arp1394->sip, (u32 *)arp_ptr);
- /* skip over sender IP addr */
- arp_ptr += arp->ar_pln;
-
- if (arp->ar_op == htons(ARPOP_REQUEST))
- memset(arp_ptr, 0, sizeof(u64));
- else
- memcpy(arp_ptr, net->dev_addr, sizeof(u64));
- }
-
- /* Now add the ethernet header. */
guid = cpu_to_be64(dev->card->guid);
if (dev_hard_header(skb, net, ether_type,
- is_broadcast ? &broadcast_hw : &guid,
+ is_broadcast ? net->broadcast : net->dev_addr,
NULL, skb->len) >= 0) {
struct fwnet_header *eth;
u16 *rawp;
skb_reset_mac_header(skb);
skb_pull(skb, sizeof(*eth));
eth = (struct fwnet_header *)skb_mac_header(skb);
- if (*eth->h_dest & 1) {
+ if (fwnet_hwaddr_is_multicast(eth->h_dest)) {
if (memcmp(eth->h_dest, net->broadcast,
net->addr_len) == 0)
skb->pkt_type = PACKET_BROADCAST;
return 0;
- no_peer:
+ err:
net->stats.rx_errors++;
net->stats.rx_dropped++;
ver = be32_to_cpu(buf_ptr[1]) & 0xffffff;
source_node_id = be32_to_cpu(buf_ptr[0]) >> 16;
- if (specifier_id == IANA_SPECIFIER_ID && ver == RFC2734_SW_VERSION) {
+ if (specifier_id == IANA_SPECIFIER_ID &&
+ (ver == RFC2734_SW_VERSION
+#if IS_ENABLED(CONFIG_IPV6)
+ || ver == RFC3146_SW_VERSION
+#endif
+ )) {
buf_ptr += 2;
length -= IEEE1394_GASP_HDR_SIZE;
fwnet_incoming_packet(dev, buf_ptr, length, source_node_id,
u8 *p;
int generation;
int node_id;
+ unsigned int sw_version;
/* ptask->generation may not have been set yet */
generation = dev->card->generation;
smp_rmb();
node_id = dev->card->node_id;
+ switch (ptask->skb->protocol) {
+ default:
+ sw_version = RFC2734_SW_VERSION;
+ break;
+#if IS_ENABLED(CONFIG_IPV6)
+ case htons(ETH_P_IPV6):
+ sw_version = RFC3146_SW_VERSION;
+#endif
+ }
+
p = skb_push(ptask->skb, IEEE1394_GASP_HDR_SIZE);
put_unaligned_be32(node_id << 16 | IANA_SPECIFIER_ID >> 8, p);
put_unaligned_be32((IANA_SPECIFIER_ID & 0xff) << 24
- | RFC2734_SW_VERSION, &p[4]);
+ | sw_version, &p[4]);
/* We should not transmit if broadcast_channel.valid == 0. */
fw_send_request(dev->card, &ptask->transaction,
return 0;
}
+static void fwnet_fifo_stop(struct fwnet_device *dev)
+{
+ if (dev->local_fifo == FWNET_NO_FIFO_ADDR)
+ return;
+
+ fw_core_remove_address_handler(&dev->handler);
+ dev->local_fifo = FWNET_NO_FIFO_ADDR;
+}
+
+static int fwnet_fifo_start(struct fwnet_device *dev)
+{
+ int retval;
+
+ if (dev->local_fifo != FWNET_NO_FIFO_ADDR)
+ return 0;
+
+ dev->handler.length = 4096;
+ dev->handler.address_callback = fwnet_receive_packet;
+ dev->handler.callback_data = dev;
+
+ retval = fw_core_add_address_handler(&dev->handler,
+ &fw_high_memory_region);
+ if (retval < 0)
+ return retval;
+
+ dev->local_fifo = dev->handler.offset;
+
+ return 0;
+}
+
+static void __fwnet_broadcast_stop(struct fwnet_device *dev)
+{
+ unsigned u;
+
+ if (dev->broadcast_state != FWNET_BROADCAST_ERROR) {
+ for (u = 0; u < FWNET_ISO_PAGE_COUNT; u++)
+ kunmap(dev->broadcast_rcv_buffer.pages[u]);
+ fw_iso_buffer_destroy(&dev->broadcast_rcv_buffer, dev->card);
+ }
+ if (dev->broadcast_rcv_context) {
+ fw_iso_context_destroy(dev->broadcast_rcv_context);
+ dev->broadcast_rcv_context = NULL;
+ }
+ kfree(dev->broadcast_rcv_buffer_ptrs);
+ dev->broadcast_rcv_buffer_ptrs = NULL;
+ dev->broadcast_state = FWNET_BROADCAST_ERROR;
+}
+
+static void fwnet_broadcast_stop(struct fwnet_device *dev)
+{
+ if (dev->broadcast_state == FWNET_BROADCAST_ERROR)
+ return;
+ fw_iso_context_stop(dev->broadcast_rcv_context);
+ __fwnet_broadcast_stop(dev);
+}
+
static int fwnet_broadcast_start(struct fwnet_device *dev)
{
struct fw_iso_context *context;
unsigned max_receive;
struct fw_iso_packet packet;
unsigned long offset;
+ void **ptrptr;
unsigned u;
- if (dev->local_fifo == FWNET_NO_FIFO_ADDR) {
- dev->handler.length = 4096;
- dev->handler.address_callback = fwnet_receive_packet;
- dev->handler.callback_data = dev;
-
- retval = fw_core_add_address_handler(&dev->handler,
- &fw_high_memory_region);
- if (retval < 0)
- goto failed_initial;
-
- dev->local_fifo = dev->handler.offset;
- }
+ if (dev->broadcast_state != FWNET_BROADCAST_ERROR)
+ return 0;
max_receive = 1U << (dev->card->max_receive + 1);
num_packets = (FWNET_ISO_PAGE_COUNT * PAGE_SIZE) / max_receive;
- if (!dev->broadcast_rcv_context) {
- void **ptrptr;
-
- context = fw_iso_context_create(dev->card,
- FW_ISO_CONTEXT_RECEIVE, IEEE1394_BROADCAST_CHANNEL,
- dev->card->link_speed, 8, fwnet_receive_broadcast, dev);
- if (IS_ERR(context)) {
- retval = PTR_ERR(context);
- goto failed_context_create;
- }
+ ptrptr = kmalloc(sizeof(void *) * num_packets, GFP_KERNEL);
+ if (!ptrptr) {
+ retval = -ENOMEM;
+ goto failed;
+ }
+ dev->broadcast_rcv_buffer_ptrs = ptrptr;
+
+ context = fw_iso_context_create(dev->card, FW_ISO_CONTEXT_RECEIVE,
+ IEEE1394_BROADCAST_CHANNEL,
+ dev->card->link_speed, 8,
+ fwnet_receive_broadcast, dev);
+ if (IS_ERR(context)) {
+ retval = PTR_ERR(context);
+ goto failed;
+ }
- retval = fw_iso_buffer_init(&dev->broadcast_rcv_buffer,
- dev->card, FWNET_ISO_PAGE_COUNT, DMA_FROM_DEVICE);
- if (retval < 0)
- goto failed_buffer_init;
+ retval = fw_iso_buffer_init(&dev->broadcast_rcv_buffer, dev->card,
+ FWNET_ISO_PAGE_COUNT, DMA_FROM_DEVICE);
+ if (retval < 0)
+ goto failed;
- ptrptr = kmalloc(sizeof(void *) * num_packets, GFP_KERNEL);
- if (!ptrptr) {
- retval = -ENOMEM;
- goto failed_ptrs_alloc;
- }
+ dev->broadcast_state = FWNET_BROADCAST_STOPPED;
- dev->broadcast_rcv_buffer_ptrs = ptrptr;
- for (u = 0; u < FWNET_ISO_PAGE_COUNT; u++) {
- void *ptr;
- unsigned v;
+ for (u = 0; u < FWNET_ISO_PAGE_COUNT; u++) {
+ void *ptr;
+ unsigned v;
- ptr = kmap(dev->broadcast_rcv_buffer.pages[u]);
- for (v = 0; v < num_packets / FWNET_ISO_PAGE_COUNT; v++)
- *ptrptr++ = (void *)
- ((char *)ptr + v * max_receive);
- }
- dev->broadcast_rcv_context = context;
- } else {
- context = dev->broadcast_rcv_context;
+ ptr = kmap(dev->broadcast_rcv_buffer.pages[u]);
+ for (v = 0; v < num_packets / FWNET_ISO_PAGE_COUNT; v++)
+ *ptrptr++ = (void *) ((char *)ptr + v * max_receive);
}
+ dev->broadcast_rcv_context = context;
packet.payload_length = max_receive;
packet.interrupt = 1;
retval = fw_iso_context_queue(context, &packet,
&dev->broadcast_rcv_buffer, offset);
if (retval < 0)
- goto failed_rcv_queue;
+ goto failed;
offset += max_receive;
}
retval = fw_iso_context_start(context, -1, 0,
FW_ISO_CONTEXT_MATCH_ALL_TAGS); /* ??? sync */
if (retval < 0)
- goto failed_rcv_queue;
+ goto failed;
/* FIXME: adjust it according to the min. speed of all known peers? */
dev->broadcast_xmt_max_payload = IEEE1394_MAX_PAYLOAD_S100
return 0;
- failed_rcv_queue:
- kfree(dev->broadcast_rcv_buffer_ptrs);
- dev->broadcast_rcv_buffer_ptrs = NULL;
- failed_ptrs_alloc:
- fw_iso_buffer_destroy(&dev->broadcast_rcv_buffer, dev->card);
- failed_buffer_init:
- fw_iso_context_destroy(context);
- dev->broadcast_rcv_context = NULL;
- failed_context_create:
- fw_core_remove_address_handler(&dev->handler);
- failed_initial:
- dev->local_fifo = FWNET_NO_FIFO_ADDR;
-
+ failed:
+ __fwnet_broadcast_stop(dev);
return retval;
}
struct fwnet_device *dev = netdev_priv(net);
int ret;
- if (dev->broadcast_state == FWNET_BROADCAST_ERROR) {
- ret = fwnet_broadcast_start(dev);
- if (ret)
- return ret;
- }
+ ret = fwnet_broadcast_start(dev);
+ if (ret)
+ return ret;
+
netif_start_queue(net);
spin_lock_irq(&dev->lock);
/* ifdown */
static int fwnet_stop(struct net_device *net)
{
- netif_stop_queue(net);
+ struct fwnet_device *dev = netdev_priv(net);
- /* Deallocate iso context for use by other applications? */
+ netif_stop_queue(net);
+ fwnet_broadcast_stop(dev);
return 0;
}
* We might need to rebuild the header on tx failure.
*/
memcpy(&hdr_buf, skb->data, sizeof(hdr_buf));
- skb_pull(skb, sizeof(hdr_buf));
-
proto = hdr_buf.h_proto;
+
+ switch (proto) {
+ case htons(ETH_P_ARP):
+ case htons(ETH_P_IP):
+#if IS_ENABLED(CONFIG_IPV6)
+ case htons(ETH_P_IPV6):
+#endif
+ break;
+ default:
+ goto fail;
+ }
+
+ skb_pull(skb, sizeof(hdr_buf));
dg_size = skb->len;
/*
* Set the transmission type for the packet. ARP packets and IP
* broadcast packets are sent via GASP.
*/
- if (memcmp(hdr_buf.h_dest, net->broadcast, FWNET_ALEN) == 0
- || proto == htons(ETH_P_ARP)
- || (proto == htons(ETH_P_IP)
- && IN_MULTICAST(ntohl(ip_hdr(skb)->daddr)))) {
+ if (fwnet_hwaddr_is_multicast(hdr_buf.h_dest)) {
max_payload = dev->broadcast_xmt_max_payload;
datagram_label_ptr = &dev->broadcast_xmt_datagramlabel;
ptask->dest_node = IEEE1394_ALL_NODES;
ptask->speed = SCODE_100;
} else {
- __be64 guid = get_unaligned((__be64 *)hdr_buf.h_dest);
+ union fwnet_hwaddr *ha = (union fwnet_hwaddr *)hdr_buf.h_dest;
+ __be64 guid = get_unaligned(&ha->uc.uniq_id);
u8 generation;
peer = fwnet_peer_find_by_guid(dev, be64_to_cpu(guid));
- if (!peer || peer->fifo == FWNET_NO_FIFO_ADDR)
+ if (!peer)
goto fail;
generation = peer->generation;
max_payload = peer->max_payload;
datagram_label_ptr = &peer->datagram_label;
- ptask->fifo_addr = peer->fifo;
+ ptask->fifo_addr = fwnet_hwaddr_fifo(ha);
ptask->generation = generation;
ptask->dest_node = dest_node;
ptask->speed = peer->speed;
}
- /* If this is an ARP packet, convert it */
- if (proto == htons(ETH_P_ARP)) {
- struct arphdr *arp = (struct arphdr *)skb->data;
- unsigned char *arp_ptr = (unsigned char *)(arp + 1);
- struct rfc2734_arp *arp1394 = (struct rfc2734_arp *)skb->data;
- __be32 ipaddr;
-
- ipaddr = get_unaligned((__be32 *)(arp_ptr + FWNET_ALEN));
-
- arp1394->hw_addr_len = RFC2734_HW_ADDR_LEN;
- arp1394->max_rec = dev->card->max_receive;
- arp1394->sspd = dev->card->link_speed;
-
- put_unaligned_be16(dev->local_fifo >> 32,
- &arp1394->fifo_hi);
- put_unaligned_be32(dev->local_fifo & 0xffffffff,
- &arp1394->fifo_lo);
- put_unaligned(ipaddr, &arp1394->sip);
- }
-
ptask->hdr.w0 = 0;
ptask->hdr.w1 = 0;
ptask->skb = skb;
peer->dev = dev;
peer->guid = (u64)device->config_rom[3] << 32 | device->config_rom[4];
- peer->fifo = FWNET_NO_FIFO_ADDR;
- peer->ip = 0;
INIT_LIST_HEAD(&peer->pd_list);
peer->pdg_size = 0;
peer->datagram_label = 0;
struct fwnet_device *dev;
unsigned max_mtu;
int ret;
+ union fwnet_hwaddr *ha;
mutex_lock(&fwnet_device_mutex);
dev->card = card;
dev->netdev = net;
+ ret = fwnet_fifo_start(dev);
+ if (ret < 0)
+ goto out;
+ dev->local_fifo = dev->handler.offset;
+
/*
* Use the RFC 2734 default 1500 octets or the maximum payload
* as initial MTU
net->mtu = min(1500U, max_mtu);
/* Set our hardware address while we're at it */
- put_unaligned_be64(card->guid, net->dev_addr);
- put_unaligned_be64(~0ULL, net->broadcast);
+ ha = (union fwnet_hwaddr *)net->dev_addr;
+ put_unaligned_be64(card->guid, &ha->uc.uniq_id);
+ ha->uc.max_rec = dev->card->max_receive;
+ ha->uc.sspd = dev->card->link_speed;
+ put_unaligned_be16(dev->local_fifo >> 32, &ha->uc.fifo_hi);
+ put_unaligned_be32(dev->local_fifo & 0xffffffff, &ha->uc.fifo_lo);
+
+ memset(net->broadcast, -1, net->addr_len);
+
ret = register_netdev(net);
if (ret)
goto out;
list_add_tail(&dev->dev_link, &fwnet_device_list);
- dev_notice(&net->dev, "IPv4 over IEEE 1394 on card %s\n",
+ dev_notice(&net->dev, "IP over IEEE 1394 on card %s\n",
dev_name(card->device));
have_dev:
ret = fwnet_add_peer(dev, unit, device);
if (ret && allocated_netdev) {
unregister_netdev(net);
list_del(&dev->dev_link);
- }
out:
- if (ret && allocated_netdev)
+ fwnet_fifo_stop(dev);
free_netdev(net);
+ }
mutex_unlock(&fwnet_device_mutex);
mutex_lock(&fwnet_device_mutex);
net = dev->netdev;
- if (net && peer->ip)
- arp_invalidate(net, peer->ip);
fwnet_remove_peer(peer, dev);
if (list_empty(&dev->peer_list)) {
unregister_netdev(net);
- if (dev->local_fifo != FWNET_NO_FIFO_ADDR)
- fw_core_remove_address_handler(&dev->handler);
- if (dev->broadcast_rcv_context) {
- fw_iso_context_stop(dev->broadcast_rcv_context);
- fw_iso_buffer_destroy(&dev->broadcast_rcv_buffer,
- dev->card);
- fw_iso_context_destroy(dev->broadcast_rcv_context);
- }
+ fwnet_fifo_stop(dev);
+
for (i = 0; dev->queued_datagrams && i < 5; i++)
ssleep(1);
WARN_ON(dev->queued_datagrams);
.specifier_id = IANA_SPECIFIER_ID,
.version = RFC2734_SW_VERSION,
},
+#if IS_ENABLED(CONFIG_IPV6)
+ {
+ .match_flags = IEEE1394_MATCH_SPECIFIER_ID |
+ IEEE1394_MATCH_VERSION,
+ .specifier_id = IANA_SPECIFIER_ID,
+ .version = RFC3146_SW_VERSION,
+ },
+#endif
{ }
};
.data = rfc2374_unit_directory_data
};
+#if IS_ENABLED(CONFIG_IPV6)
+static const u32 rfc3146_unit_directory_data[] = {
+ 0x00040000, /* directory_length */
+ 0x1200005e, /* unit_specifier_id: IANA */
+ 0x81000003, /* textual descriptor offset */
+ 0x13000002, /* unit_sw_version: RFC 3146 */
+ 0x81000005, /* textual descriptor offset */
+ 0x00030000, /* descriptor_length */
+ 0x00000000, /* text */
+ 0x00000000, /* minimal ASCII, en */
+ 0x49414e41, /* I A N A */
+ 0x00030000, /* descriptor_length */
+ 0x00000000, /* text */
+ 0x00000000, /* minimal ASCII, en */
+ 0x49507636, /* I P v 6 */
+};
+
+static struct fw_descriptor rfc3146_unit_directory = {
+ .length = ARRAY_SIZE(rfc3146_unit_directory_data),
+ .key = (CSR_DIRECTORY | CSR_UNIT) << 24,
+ .data = rfc3146_unit_directory_data
+};
+#endif
+
static int __init fwnet_init(void)
{
int err;
if (err)
return err;
+#if IS_ENABLED(CONFIG_IPV6)
+ err = fw_core_add_descriptor(&rfc3146_unit_directory);
+ if (err)
+ goto out;
+#endif
+
fwnet_packet_task_cache = kmem_cache_create("packet_task",
sizeof(struct fwnet_packet_task), 0, 0, NULL);
if (!fwnet_packet_task_cache) {
err = -ENOMEM;
- goto out;
+ goto out2;
}
err = driver_register(&fwnet_driver.driver);
return 0;
kmem_cache_destroy(fwnet_packet_task_cache);
+out2:
+#if IS_ENABLED(CONFIG_IPV6)
+ fw_core_remove_descriptor(&rfc3146_unit_directory);
out:
+#endif
fw_core_remove_descriptor(&rfc2374_unit_directory);
return err;
{
driver_unregister(&fwnet_driver.driver);
kmem_cache_destroy(fwnet_packet_task_cache);
+#if IS_ENABLED(CONFIG_IPV6)
+ fw_core_remove_descriptor(&rfc3146_unit_directory);
+#endif
fw_core_remove_descriptor(&rfc2374_unit_directory);
}
module_exit(fwnet_cleanup);
MODULE_AUTHOR("Jay Fenlason <fenlason@redhat.com>");
-MODULE_DESCRIPTION("IPv4 over IEEE1394 as per RFC 2734");
+MODULE_DESCRIPTION("IP over IEEE1394 as per RFC 2734/3146");
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(ieee1394, fwnet_id_table);
static int send_connect(struct c4iw_ep *ep)
{
struct cpl_act_open_req *req;
+ struct cpl_t5_act_open_req *t5_req;
struct sk_buff *skb;
u64 opt0;
u32 opt2;
unsigned int mtu_idx;
int wscale;
- int wrlen = roundup(sizeof *req, 16);
+ int size = is_t4(ep->com.dev->rdev.lldi.adapter_type) ?
+ sizeof(struct cpl_act_open_req) :
+ sizeof(struct cpl_t5_act_open_req);
+ int wrlen = roundup(size, 16);
PDBG("%s ep %p atid %u\n", __func__, ep, ep->atid);
opt2 |= WND_SCALE_EN(1);
t4_set_arp_err_handler(skb, NULL, act_open_req_arp_failure);
- req = (struct cpl_act_open_req *) skb_put(skb, wrlen);
- INIT_TP_WR(req, 0);
- OPCODE_TID(req) = cpu_to_be32(
- MK_OPCODE_TID(CPL_ACT_OPEN_REQ, ((ep->rss_qid<<14)|ep->atid)));
- req->local_port = ep->com.local_addr.sin_port;
- req->peer_port = ep->com.remote_addr.sin_port;
- req->local_ip = ep->com.local_addr.sin_addr.s_addr;
- req->peer_ip = ep->com.remote_addr.sin_addr.s_addr;
- req->opt0 = cpu_to_be64(opt0);
- req->params = cpu_to_be32(select_ntuple(ep->com.dev, ep->dst, ep->l2t));
- req->opt2 = cpu_to_be32(opt2);
+ if (is_t4(ep->com.dev->rdev.lldi.adapter_type)) {
+ req = (struct cpl_act_open_req *) skb_put(skb, wrlen);
+ INIT_TP_WR(req, 0);
+ OPCODE_TID(req) = cpu_to_be32(
+ MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
+ ((ep->rss_qid << 14) | ep->atid)));
+ req->local_port = ep->com.local_addr.sin_port;
+ req->peer_port = ep->com.remote_addr.sin_port;
+ req->local_ip = ep->com.local_addr.sin_addr.s_addr;
+ req->peer_ip = ep->com.remote_addr.sin_addr.s_addr;
+ req->opt0 = cpu_to_be64(opt0);
+ req->params = cpu_to_be32(select_ntuple(ep->com.dev,
+ ep->dst, ep->l2t));
+ req->opt2 = cpu_to_be32(opt2);
+ } else {
+ t5_req = (struct cpl_t5_act_open_req *) skb_put(skb, wrlen);
+ INIT_TP_WR(t5_req, 0);
+ OPCODE_TID(t5_req) = cpu_to_be32(
+ MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
+ ((ep->rss_qid << 14) | ep->atid)));
+ t5_req->local_port = ep->com.local_addr.sin_port;
+ t5_req->peer_port = ep->com.remote_addr.sin_port;
+ t5_req->local_ip = ep->com.local_addr.sin_addr.s_addr;
+ t5_req->peer_ip = ep->com.remote_addr.sin_addr.s_addr;
+ t5_req->opt0 = cpu_to_be64(opt0);
+ t5_req->params = cpu_to_be64(V_FILTER_TUPLE(
+ select_ntuple(ep->com.dev, ep->dst, ep->l2t)));
+ t5_req->opt2 = cpu_to_be32(opt2);
+ }
+
set_bit(ACT_OPEN_REQ, &ep->com.history);
return c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
}
case CPL_ERR_CONN_TIMEDOUT:
break;
case CPL_ERR_TCAM_FULL:
+ dev->rdev.stats.tcam_full++;
if (dev->rdev.lldi.enable_fw_ofld_conn) {
mutex_lock(&dev->rdev.stats.lock);
- dev->rdev.stats.tcam_full++;
mutex_unlock(&dev->rdev.stats.lock);
send_fw_act_open_req(ep,
GET_TID_TID(GET_AOPEN_ATID(
static void build_cpl_pass_accept_req(struct sk_buff *skb, int stid , u8 tos)
{
u32 l2info;
- u16 vlantag, len, hdr_len;
+ u16 vlantag, len, hdr_len, eth_hdr_len;
u8 intf;
struct cpl_rx_pkt *cpl = cplhdr(skb);
struct cpl_pass_accept_req *req;
struct tcp_options_received tmp_opt;
+ struct c4iw_dev *dev;
+ dev = *((struct c4iw_dev **) (skb->cb + sizeof(void *)));
/* Store values from cpl_rx_pkt in temporary location. */
vlantag = (__force u16) cpl->vlan;
len = (__force u16) cpl->len;
*/
memset(&tmp_opt, 0, sizeof(tmp_opt));
tcp_clear_options(&tmp_opt);
- tcp_parse_options(skb, &tmp_opt, NULL, 0, NULL);
+ tcp_parse_options(skb, &tmp_opt, 0, NULL);
req = (struct cpl_pass_accept_req *)__skb_push(skb, sizeof(*req));
memset(req, 0, sizeof(*req));
V_SYN_MAC_IDX(G_RX_MACIDX(
(__force int) htonl(l2info))) |
F_SYN_XACT_MATCH);
+ eth_hdr_len = is_t4(dev->rdev.lldi.adapter_type) ?
+ G_RX_ETHHDR_LEN((__force int) htonl(l2info)) :
+ G_RX_T5_ETHHDR_LEN((__force int) htonl(l2info));
req->hdr_len = cpu_to_be32(V_SYN_RX_CHAN(G_RX_CHAN(
(__force int) htonl(l2info))) |
V_TCP_HDR_LEN(G_RX_TCPHDR_LEN(
(__force int) htons(hdr_len))) |
V_IP_HDR_LEN(G_RX_IPHDR_LEN(
(__force int) htons(hdr_len))) |
- V_ETH_HDR_LEN(G_RX_ETHHDR_LEN(
- (__force int) htonl(l2info))));
+ V_ETH_HDR_LEN(G_RX_ETHHDR_LEN(eth_hdr_len)));
req->vlan = (__force __be16) vlantag;
req->len = (__force __be16) len;
req->tos_stid = cpu_to_be32(PASS_OPEN_TID(stid) |
u16 window;
struct port_info *pi;
struct net_device *pdev;
- u16 rss_qid;
+ u16 rss_qid, eth_hdr_len;
int step;
u32 tx_chan;
struct neighbour *neigh;
goto reject;
}
- if (G_RX_ETHHDR_LEN(ntohl(cpl->l2info)) == ETH_HLEN) {
+ eth_hdr_len = is_t4(dev->rdev.lldi.adapter_type) ?
+ G_RX_ETHHDR_LEN(htonl(cpl->l2info)) :
+ G_RX_T5_ETHHDR_LEN(htonl(cpl->l2info));
+ if (eth_hdr_len == ETH_HLEN) {
eh = (struct ethhdr *)(req + 1);
iph = (struct iphdr *)(eh + 1);
} else {
#define DRV_VERSION "0.1"
MODULE_AUTHOR("Steve Wise");
-MODULE_DESCRIPTION("Chelsio T4 RDMA Driver");
+MODULE_DESCRIPTION("Chelsio T4/T5 RDMA Driver");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(DRV_VERSION);
+static int allow_db_fc_on_t5;
+module_param(allow_db_fc_on_t5, int, 0644);
+MODULE_PARM_DESC(allow_db_fc_on_t5,
+ "Allow DB Flow Control on T5 (default = 0)");
+
+static int allow_db_coalescing_on_t5;
+module_param(allow_db_coalescing_on_t5, int, 0644);
+MODULE_PARM_DESC(allow_db_coalescing_on_t5,
+ "Allow DB Coalescing on T5 (default = 0)");
+
struct uld_ctx {
struct list_head entry;
struct cxgb4_lld_info lldi;
{
return infop->vr->stag.size > 0 && infop->vr->pbl.size > 0 &&
infop->vr->rq.size > 0 && infop->vr->qp.size > 0 &&
- infop->vr->cq.size > 0 && infop->vr->ocq.size > 0;
+ infop->vr->cq.size > 0;
}
static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop)
pci_name(infop->pdev));
return ERR_PTR(-ENOSYS);
}
+ if (!ocqp_supported(infop))
+ pr_info("%s: On-Chip Queues not supported on this device.\n",
+ pci_name(infop->pdev));
+
+ if (!is_t4(infop->adapter_type)) {
+ if (!allow_db_fc_on_t5) {
+ db_fc_threshold = 100000;
+ pr_info("DB Flow Control Disabled.\n");
+ }
+
+ if (!allow_db_coalescing_on_t5) {
+ db_coalescing_threshold = -1;
+ pr_info("DB Coalescing Disabled.\n");
+ }
+ }
+
devp = (struct c4iw_dev *)ib_alloc_device(sizeof(*devp));
if (!devp) {
printk(KERN_ERR MOD "Cannot allocate ib device\n");
int i;
if (!vers_printed++)
- printk(KERN_INFO MOD "Chelsio T4 RDMA Driver - version %s\n",
- DRV_VERSION);
+ pr_info("Chelsio T4/T5 RDMA Driver - version %s\n",
+ DRV_VERSION);
ctx = kzalloc(sizeof *ctx, GFP_KERNEL);
if (!ctx) {
return min((int)T4_MAX_NUM_STAG, (int)(rdev->lldi.vr->stag.size >> 5));
}
-#define C4IW_WR_TO (10*HZ)
+#define C4IW_WR_TO (30*HZ)
struct c4iw_wr_wait {
struct completion completion;
DEFINE_DMA_UNMAP_ADDR(mapping);
dma_addr_t dma_addr;
struct c4iw_dev *dev;
- int size;
};
static inline struct c4iw_fr_page_list *to_c4iw_fr_page_list(
return wscale;
}
+static inline int ocqp_supported(const struct cxgb4_lld_info *infop)
+{
+#if defined(__i386__) || defined(__x86_64__) || defined(CONFIG_PPC64)
+ return infop->vr->ocq.size > 0;
+#else
+ return 0;
+#endif
+}
+
u32 c4iw_id_alloc(struct c4iw_id_table *alloc);
void c4iw_id_free(struct c4iw_id_table *alloc, u32 obj);
int c4iw_id_table_alloc(struct c4iw_id_table *alloc, u32 start, u32 num,
extern c4iw_handler_func c4iw_handlers[NUM_CPL_CMDS];
extern int c4iw_max_read_depth;
extern int db_fc_threshold;
+extern int db_coalescing_threshold;
+extern int use_dsgl;
#endif
* SOFTWARE.
*/
+#include <linux/module.h>
+#include <linux/moduleparam.h>
#include <rdma/ib_umem.h>
#include <linux/atomic.h>
#include "iw_cxgb4.h"
+int use_dsgl = 1;
+module_param(use_dsgl, int, 0644);
+MODULE_PARM_DESC(use_dsgl, "Use DSGL for PBL/FastReg (default=1)");
+
#define T4_ULPTX_MIN_IO 32
#define C4IW_MAX_INLINE_SIZE 96
+#define T4_ULPTX_MAX_DMA 1024
+#define C4IW_INLINE_THRESHOLD 128
-static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
- void *data)
+static int inline_threshold = C4IW_INLINE_THRESHOLD;
+module_param(inline_threshold, int, 0644);
+MODULE_PARM_DESC(inline_threshold, "inline vs dsgl threshold (default=128)");
+
+static int _c4iw_write_mem_dma_aligned(struct c4iw_rdev *rdev, u32 addr,
+ u32 len, dma_addr_t data, int wait)
+{
+ struct sk_buff *skb;
+ struct ulp_mem_io *req;
+ struct ulptx_sgl *sgl;
+ u8 wr_len;
+ int ret = 0;
+ struct c4iw_wr_wait wr_wait;
+
+ addr &= 0x7FFFFFF;
+
+ if (wait)
+ c4iw_init_wr_wait(&wr_wait);
+ wr_len = roundup(sizeof(*req) + sizeof(*sgl), 16);
+
+ skb = alloc_skb(wr_len, GFP_KERNEL | __GFP_NOFAIL);
+ if (!skb)
+ return -ENOMEM;
+ set_wr_txq(skb, CPL_PRIORITY_CONTROL, 0);
+
+ req = (struct ulp_mem_io *)__skb_put(skb, wr_len);
+ memset(req, 0, wr_len);
+ INIT_ULPTX_WR(req, wr_len, 0, 0);
+ req->wr.wr_hi = cpu_to_be32(FW_WR_OP(FW_ULPTX_WR) |
+ (wait ? FW_WR_COMPL(1) : 0));
+ req->wr.wr_lo = wait ? (__force __be64)&wr_wait : 0;
+ req->wr.wr_mid = cpu_to_be32(FW_WR_LEN16(DIV_ROUND_UP(wr_len, 16)));
+ req->cmd = cpu_to_be32(ULPTX_CMD(ULP_TX_MEM_WRITE));
+ req->cmd |= cpu_to_be32(V_T5_ULP_MEMIO_ORDER(1));
+ req->dlen = cpu_to_be32(ULP_MEMIO_DATA_LEN(len>>5));
+ req->len16 = cpu_to_be32(DIV_ROUND_UP(wr_len-sizeof(req->wr), 16));
+ req->lock_addr = cpu_to_be32(ULP_MEMIO_ADDR(addr));
+
+ sgl = (struct ulptx_sgl *)(req + 1);
+ sgl->cmd_nsge = cpu_to_be32(ULPTX_CMD(ULP_TX_SC_DSGL) |
+ ULPTX_NSGE(1));
+ sgl->len0 = cpu_to_be32(len);
+ sgl->addr0 = cpu_to_be64(data);
+
+ ret = c4iw_ofld_send(rdev, skb);
+ if (ret)
+ return ret;
+ if (wait)
+ ret = c4iw_wait_for_reply(rdev, &wr_wait, 0, 0, __func__);
+ return ret;
+}
+
+static int _c4iw_write_mem_inline(struct c4iw_rdev *rdev, u32 addr, u32 len,
+ void *data)
{
struct sk_buff *skb;
struct ulp_mem_io *req;
u8 wr_len, *to_dp, *from_dp;
int copy_len, num_wqe, i, ret = 0;
struct c4iw_wr_wait wr_wait;
+ __be32 cmd = cpu_to_be32(ULPTX_CMD(ULP_TX_MEM_WRITE));
+
+ if (is_t4(rdev->lldi.adapter_type))
+ cmd |= cpu_to_be32(ULP_MEMIO_ORDER(1));
+ else
+ cmd |= cpu_to_be32(V_T5_ULP_MEMIO_IMM(1));
addr &= 0x7FFFFFF;
PDBG("%s addr 0x%x len %u\n", __func__, addr, len);
req->wr.wr_mid = cpu_to_be32(
FW_WR_LEN16(DIV_ROUND_UP(wr_len, 16)));
- req->cmd = cpu_to_be32(ULPTX_CMD(ULP_TX_MEM_WRITE) | (1<<23));
+ req->cmd = cmd;
req->dlen = cpu_to_be32(ULP_MEMIO_DATA_LEN(
DIV_ROUND_UP(copy_len, T4_ULPTX_MIN_IO)));
req->len16 = cpu_to_be32(DIV_ROUND_UP(wr_len-sizeof(req->wr),
return ret;
}
+static int _c4iw_write_mem_dma(struct c4iw_rdev *rdev, u32 addr, u32 len,
+			       void *data)
+{
+ u32 remain = len;
+ u32 dmalen;
+ int ret = 0;
+ dma_addr_t daddr;
+ dma_addr_t save;
+
+ daddr = dma_map_single(&rdev->lldi.pdev->dev, data, len, DMA_TO_DEVICE);
+ if (dma_mapping_error(&rdev->lldi.pdev->dev, daddr))
+ return -ENOMEM;
+ save = daddr;
+
+ while (remain > inline_threshold) {
+ if (remain < T4_ULPTX_MAX_DMA) {
+ if (remain & ~T4_ULPTX_MIN_IO)
+ dmalen = remain & ~(T4_ULPTX_MIN_IO-1);
+ else
+ dmalen = remain;
+ } else
+ dmalen = T4_ULPTX_MAX_DMA;
+ remain -= dmalen;
+ ret = _c4iw_write_mem_dma_aligned(rdev, addr, dmalen, daddr,
+ !remain);
+ if (ret)
+ goto out;
+ addr += dmalen >> 5;
+ data += dmalen;
+ daddr += dmalen;
+ }
+ if (remain)
+ ret = _c4iw_write_mem_inline(rdev, addr, remain, data);
+out:
+ dma_unmap_single(&rdev->lldi.pdev->dev, save, len, DMA_TO_DEVICE);
+ return ret;
+}
+
+/*
+ * write len bytes of data into addr (32B aligned address)
+ * If data is NULL, clear len byte of memory to zero.
+ */
+static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
+ void *data)
+{
+ if (is_t5(rdev->lldi.adapter_type) && use_dsgl) {
+ if (len > inline_threshold) {
+ if (_c4iw_write_mem_dma(rdev, addr, len, data)) {
+ printk_ratelimited(KERN_WARNING
+ "%s: dma map failure (non fatal)\n",
+ pci_name(rdev->lldi.pdev));
+ return _c4iw_write_mem_inline(rdev, addr, len,
+ data);
+ } else
+ return 0;
+ } else
+ return _c4iw_write_mem_inline(rdev, addr, len, data);
+ } else
+ return _c4iw_write_mem_inline(rdev, addr, len, data);
+}
+
/*
* Build and write a TPT entry.
* IN: stag key, pdid, perm, bind_enabled, zbva, to, len, page_size,
struct c4iw_fr_page_list *c4pl;
struct c4iw_dev *dev = to_c4iw_dev(device);
dma_addr_t dma_addr;
- int size = sizeof *c4pl + page_list_len * sizeof(u64);
+ int pll_len = roundup(page_list_len * sizeof(u64), 32);
- c4pl = dma_alloc_coherent(&dev->rdev.lldi.pdev->dev, size,
- &dma_addr, GFP_KERNEL);
+ c4pl = kmalloc(sizeof(*c4pl), GFP_KERNEL);
if (!c4pl)
return ERR_PTR(-ENOMEM);
+ c4pl->ibpl.page_list = dma_alloc_coherent(&dev->rdev.lldi.pdev->dev,
+ pll_len, &dma_addr,
+ GFP_KERNEL);
+ if (!c4pl->ibpl.page_list) {
+ kfree(c4pl);
+ return ERR_PTR(-ENOMEM);
+ }
dma_unmap_addr_set(c4pl, mapping, dma_addr);
c4pl->dma_addr = dma_addr;
c4pl->dev = dev;
- c4pl->size = size;
- c4pl->ibpl.page_list = (u64 *)(c4pl + 1);
- c4pl->ibpl.max_page_list_len = page_list_len;
+ c4pl->ibpl.max_page_list_len = pll_len;
return &c4pl->ibpl;
}
{
struct c4iw_fr_page_list *c4pl = to_c4iw_fr_page_list(ibpl);
- dma_free_coherent(&c4pl->dev->rdev.lldi.pdev->dev, c4pl->size,
- c4pl, dma_unmap_addr(c4pl, mapping));
+ dma_free_coherent(&c4pl->dev->rdev.lldi.pdev->dev,
+ c4pl->ibpl.max_page_list_len,
+ c4pl->ibpl.page_list, dma_unmap_addr(c4pl, mapping));
+ kfree(c4pl);
}
int c4iw_dereg_mr(struct ib_mr *ib_mr)
*/
if (addr >= rdev->oc_mw_pa)
vma->vm_page_prot = t4_pgprot_wc(vma->vm_page_prot);
- else
- vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ else {
+ if (is_t5(rdev->lldi.adapter_type))
+ vma->vm_page_prot =
+ t4_pgprot_wc(vma->vm_page_prot);
+ else
+ vma->vm_page_prot =
+ pgprot_noncached(vma->vm_page_prot);
+ }
ret = io_remap_pfn_range(vma, vma->vm_start,
addr >> PAGE_SHIFT,
len, vma->vm_page_prot);
dev = to_c4iw_dev(ibdev);
memset(props, 0, sizeof *props);
memcpy(&props->sys_image_guid, dev->rdev.lldi.ports[0]->dev_addr, 6);
- props->hw_ver = dev->rdev.lldi.adapter_type;
+ props->hw_ver = CHELSIO_CHIP_RELEASE(dev->rdev.lldi.adapter_type);
props->fw_ver = dev->rdev.lldi.fw_vers;
props->device_cap_flags = dev->device_cap_flags;
props->page_size_cap = T4_PAGESIZE_MASK;
struct c4iw_dev *c4iw_dev = container_of(dev, struct c4iw_dev,
ibdev.dev);
PDBG("%s dev 0x%p\n", __func__, dev);
- return sprintf(buf, "%d\n", c4iw_dev->rdev.lldi.adapter_type);
+ return sprintf(buf, "%d\n",
+ CHELSIO_CHIP_RELEASE(c4iw_dev->rdev.lldi.adapter_type));
}
static ssize_t show_fw_ver(struct device *dev, struct device_attribute *attr,
module_param(ocqp_support, int, 0644);
MODULE_PARM_DESC(ocqp_support, "Support on-chip SQs (default=1)");
-int db_fc_threshold = 2000;
+int db_fc_threshold = 1000;
module_param(db_fc_threshold, int, 0644);
-MODULE_PARM_DESC(db_fc_threshold, "QP count/threshold that triggers automatic "
- "db flow control mode (default = 2000)");
+MODULE_PARM_DESC(db_fc_threshold,
+		 "QP count/threshold that triggers automatic db flow control mode (default = 1000)");
+
+int db_coalescing_threshold;
+module_param(db_coalescing_threshold, int, 0644);
+MODULE_PARM_DESC(db_coalescing_threshold,
+		 "QP count/threshold that triggers disabling db coalescing (default = 0)");
+
+static int max_fr_immd = T4_MAX_FR_IMMD;
+module_param(max_fr_immd, int, 0644);
+MODULE_PARM_DESC(max_fr_immd, "fastreg threshold for using DSGL instead of immediate");
static void set_state(struct c4iw_qp *qhp, enum c4iw_qp_state state)
{
static int alloc_oc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
{
- if (!ocqp_support || !t4_ocqp_supported())
+ if (!ocqp_support || !ocqp_supported(&rdev->lldi))
return -ENOSYS;
sq->dma_addr = c4iw_ocqp_pool_alloc(rdev, sq->memsize);
if (!sq->dma_addr)
int wr_len;
struct c4iw_wr_wait wr_wait;
struct sk_buff *skb;
- int ret;
+ int ret = 0;
int eqsize;
wq->sq.qid = c4iw_get_qpid(rdev, uctx);
}
if (user) {
- ret = alloc_oc_sq(rdev, &wq->sq);
- if (ret)
+ if (alloc_oc_sq(rdev, &wq->sq) && alloc_host_sq(rdev, &wq->sq)) {
+ ret = -ENOMEM;
+ goto free_hwaddr;
+ }
-
- ret = alloc_host_sq(rdev, &wq->sq);
- if (ret)
- goto free_sq;
- } else
+ } else {
ret = alloc_host_sq(rdev, &wq->sq);
if (ret)
goto free_hwaddr;
+ }
+
memset(wq->sq.queue, 0, wq->sq.memsize);
dma_unmap_addr_set(&wq->sq, mapping, wq->sq.dma_addr);
}
static int build_fastreg(struct t4_sq *sq, union t4_wr *wqe,
- struct ib_send_wr *wr, u8 *len16)
+ struct ib_send_wr *wr, u8 *len16, u8 t5dev)
{
struct fw_ri_immd *imdp;
wqe->fr.va_hi = cpu_to_be32(wr->wr.fast_reg.iova_start >> 32);
wqe->fr.va_lo_fbo = cpu_to_be32(wr->wr.fast_reg.iova_start &
0xffffffff);
- WARN_ON(pbllen > T4_MAX_FR_IMMD);
- imdp = (struct fw_ri_immd *)(&wqe->fr + 1);
- imdp->op = FW_RI_DATA_IMMD;
- imdp->r1 = 0;
- imdp->r2 = 0;
- imdp->immdlen = cpu_to_be32(pbllen);
- p = (__be64 *)(imdp + 1);
- rem = pbllen;
- for (i = 0; i < wr->wr.fast_reg.page_list_len; i++) {
- *p = cpu_to_be64((u64)wr->wr.fast_reg.page_list->page_list[i]);
- rem -= sizeof *p;
- if (++p == (__be64 *)&sq->queue[sq->size])
- p = (__be64 *)sq->queue;
- }
- BUG_ON(rem < 0);
- while (rem) {
- *p = 0;
- rem -= sizeof *p;
- if (++p == (__be64 *)&sq->queue[sq->size])
- p = (__be64 *)sq->queue;
+
+ if (t5dev && use_dsgl && (pbllen > max_fr_immd)) {
+ struct c4iw_fr_page_list *c4pl =
+ to_c4iw_fr_page_list(wr->wr.fast_reg.page_list);
+ struct fw_ri_dsgl *sglp;
+
+ for (i = 0; i < wr->wr.fast_reg.page_list_len; i++) {
+ wr->wr.fast_reg.page_list->page_list[i] = (__force u64)
+ cpu_to_be64((u64)
+ wr->wr.fast_reg.page_list->page_list[i]);
+ }
+
+ sglp = (struct fw_ri_dsgl *)(&wqe->fr + 1);
+ sglp->op = FW_RI_DATA_DSGL;
+ sglp->r1 = 0;
+ sglp->nsge = cpu_to_be16(1);
+ sglp->addr0 = cpu_to_be64(c4pl->dma_addr);
+ sglp->len0 = cpu_to_be32(pbllen);
+
+ *len16 = DIV_ROUND_UP(sizeof(wqe->fr) + sizeof(*sglp), 16);
+ } else {
+ imdp = (struct fw_ri_immd *)(&wqe->fr + 1);
+ imdp->op = FW_RI_DATA_IMMD;
+ imdp->r1 = 0;
+ imdp->r2 = 0;
+ imdp->immdlen = cpu_to_be32(pbllen);
+ p = (__be64 *)(imdp + 1);
+ rem = pbllen;
+ for (i = 0; i < wr->wr.fast_reg.page_list_len; i++) {
+ *p = cpu_to_be64(
+ (u64)wr->wr.fast_reg.page_list->page_list[i]);
+ rem -= sizeof(*p);
+ if (++p == (__be64 *)&sq->queue[sq->size])
+ p = (__be64 *)sq->queue;
+ }
+ BUG_ON(rem < 0);
+ while (rem) {
+ *p = 0;
+ rem -= sizeof(*p);
+ if (++p == (__be64 *)&sq->queue[sq->size])
+ p = (__be64 *)sq->queue;
+ }
+ *len16 = DIV_ROUND_UP(sizeof(wqe->fr) + sizeof(*imdp)
+ + pbllen, 16);
}
- *len16 = DIV_ROUND_UP(sizeof wqe->fr + sizeof *imdp + pbllen, 16);
return 0;
}
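
The branch added above is easier to read as two on-the-wire layouts (a descriptive
sketch inferred from the code, not text from the patch):

	/*
	 * Immediate path: the PBL is byte-swapped while being copied
	 * inline into the send queue, zero-padded to the end of the
	 * immediate region:
	 *   [fw_ri_fr_nsmr_wr][fw_ri_immd][pbllen bytes of BE64 PBL]
	 *
	 * DSGL path (T5 device, use_dsgl set, pbllen > max_fr_immd):
	 * only a single gather entry goes into the WQE; the PBL entries
	 * are swapped in place because the hardware DMAs them straight
	 * out of host memory:
	 *   [fw_ri_fr_nsmr_wr][fw_ri_dsgl: addr0=c4pl->dma_addr, len0=pbllen]
	 */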
case IB_WR_FAST_REG_MR:
fw_opcode = FW_RI_FR_NSMR_WR;
swsqe->opcode = FW_RI_FAST_REGISTER;
- err = build_fastreg(&qhp->wq.sq, wqe, wr, &len16);
+		err = build_fastreg(&qhp->wq.sq, wqe, wr, &len16,
+				    is_t5(qhp->rhp->rdev.lldi.adapter_type) ?
+				    1 : 0);
break;
case IB_WR_LOCAL_INV:
if (wr->send_flags & IB_SEND_FENCE)
rhp->db_state = NORMAL;
idr_for_each(&rhp->qpidr, enable_qp_db, NULL);
}
+ if (db_coalescing_threshold >= 0)
+ if (rhp->qpcnt <= db_coalescing_threshold)
+ cxgb4_enable_db_coalescing(rhp->rdev.lldi.ports[0]);
spin_unlock_irq(&rhp->lock);
atomic_dec(&qhp->refcnt);
wait_event(qhp->wait, !atomic_read(&qhp->refcnt));
spin_lock_irq(&rhp->lock);
if (rhp->db_state != NORMAL)
t4_disable_wq_db(&qhp->wq);
- if (++rhp->qpcnt > db_fc_threshold && rhp->db_state == NORMAL) {
+ rhp->qpcnt++;
+ if (rhp->qpcnt > db_fc_threshold && rhp->db_state == NORMAL) {
rhp->rdev.stats.db_state_transitions++;
rhp->db_state = FLOW_CONTROL;
idr_for_each(&rhp->qpidr, disable_qp_db, NULL);
}
+ if (db_coalescing_threshold >= 0)
+ if (rhp->qpcnt > db_coalescing_threshold)
+ cxgb4_disable_db_coalescing(rhp->rdev.lldi.ports[0]);
ret = insert_handle_nolock(rhp, &rhp->qpidr, qhp, qhp->wq.sq.qid);
spin_unlock_irq(&rhp->lock);
if (ret)
sizeof(struct fw_ri_isgl)) / sizeof(struct fw_ri_sge))
#define T4_MAX_FR_IMMD ((T4_SQ_NUM_BYTES - sizeof(struct fw_ri_fr_nsmr_wr) - \
sizeof(struct fw_ri_immd)) & ~31UL)
-#define T4_MAX_FR_DEPTH (T4_MAX_FR_IMMD / sizeof(u64))
+#define T4_MAX_FR_DEPTH (1024 / sizeof(u64))
#define T4_RQ_NUM_SLOTS 2
#define T4_RQ_NUM_BYTES (T4_EQ_ENTRY_SIZE * T4_RQ_NUM_SLOTS)
#endif
}
-static inline int t4_ocqp_supported(void)
-{
-#if defined(__i386__) || defined(__x86_64__) || defined(CONFIG_PPC64)
- return 1;
-#else
- return 0;
-#endif
-}
-
enum {
T4_SQ_ONCHIP = (1<<0),
};
{
struct ncci_datahandle_queue *n, **pp;
- n = (struct ncci_datahandle_queue *)
- kmalloc(sizeof(struct ncci_datahandle_queue), GFP_ATOMIC);
+ n = kmalloc(sizeof(struct ncci_datahandle_queue), GFP_ATOMIC);
if (!n) {
printk(KERN_ERR "capidrv: kmalloc ncci_datahandle failed\n");
return -1;
{
int i;
- fsm->jumpmatrix = (FSMFNPTR *)
+ fsm->jumpmatrix =
kzalloc(sizeof(FSMFNPTR) * fsm->state_count * fsm->event_count, GFP_KERNEL);
if (!fsm->jumpmatrix)
return -ENOMEM;
release_region(cs->hw.hfcsx.base, 2);
return (0);
}
- if (!(cs->hw.hfcsx.extra = (void *)
+ if (!(cs->hw.hfcsx.extra =
kmalloc(sizeof(struct hfcsx_extra), GFP_ATOMIC))) {
release_region(cs->hw.hfcsx.base, 2);
printk(KERN_WARNING "HFC-SX: unable to allocate memory\n");
config VXLAN
tristate "Virtual eXtensible Local Area Network (VXLAN)"
depends on INET
+ select NET_IP_TUNNEL
---help---
This allows one to create vxlan virtual interfaces that provide
Layer 2 Networks over Layer 3 Networks. VXLAN is often used
IP packets inside AppleTalk frames; this is useful if your Linux box
is stuck on an AppleTalk network (which hopefully contains a
decapsulator somewhere). Please see
- <file:Documentation/networking/ipddp.txt> for more information. If
- you said Y to "AppleTalk-IP driver support" above and you say Y
- here, then you cannot say Y to "AppleTalk-IP to IP Decapsulation
- support", below.
-
-config IPDDP_DECAP
- bool "Appletalk-IP to IP Decapsulation support"
- depends on IPDDP
- help
- If you say Y here, the AppleTalk-IP code will be able to decapsulate
- AppleTalk-IP frames to IP packets; this is useful if you want your
- Linux box to act as an Internet gateway for an AppleTalk network.
- Please see <file:Documentation/networking/ipddp.txt> for more
- information. If you said Y to "AppleTalk-IP driver support" above
- and you say Y here, then you cannot say Y to "IP to AppleTalk-IP
- Encapsulation support", above.
-
+ <file:Documentation/networking/ipddp.txt> for more information.
{
struct bonding *bond = container_of(work, struct bonding,
mcast_work.work);
- rcu_read_lock();
+
bond_resend_igmp_join_requests(bond);
- rcu_read_unlock();
}
/*
help to synchronize to the next transfer in case of over or under-runs.
This option also needs to be enabled on the modem.
-config CAIF_SHM
- tristate "CAIF shared memory protocol driver"
- depends on CAIF && U5500_MBOX
- default n
- ---help---
- The CAIF shared memory protocol driver for the STE UX5500 platform.
-
config CAIF_HSI
tristate "CAIF HSI transport driver"
depends on CAIF
cfspi_slave-objs := caif_spi.o caif_spi_slave.o
obj-$(CONFIG_CAIF_SPI_SLAVE) += cfspi_slave.o
-# Shared memory
-caif_shm-objs := caif_shmcore.o caif_shm_u5500.o
-obj-$(CONFIG_CAIF_SHM) += caif_shm.o
-
# HSI interface
obj-$(CONFIG_CAIF_HSI) += caif_hsi.o
+++ /dev/null
-/*
- * Copyright (C) ST-Ericsson AB 2010
- * Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
- * Author: Amarnath Revanna / amarnath.bangalore.revanna@stericsson.com
- * License terms: GNU General Public License (GPL) version 2
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ":" fmt
-
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <mach/mbox-db5500.h>
-#include <net/caif/caif_shm.h>
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("CAIF Shared Memory protocol driver");
-
-#define MAX_SHM_INSTANCES 1
-
-enum {
- MBX_ACC0,
- MBX_ACC1,
- MBX_DSP
-};
-
-static struct shmdev_layer shmdev_lyr[MAX_SHM_INSTANCES];
-
-static unsigned int shm_start;
-static unsigned int shm_size;
-
-module_param(shm_size, uint , 0440);
-MODULE_PARM_DESC(shm_total_size, "Start of SHM shared memory");
-
-module_param(shm_start, uint , 0440);
-MODULE_PARM_DESC(shm_total_start, "Total Size of SHM shared memory");
-
-static int shmdev_send_msg(u32 dev_id, u32 mbx_msg)
-{
- /* Always block until msg is written successfully */
- mbox_send(shmdev_lyr[dev_id].hmbx, mbx_msg, true);
- return 0;
-}
-
-static int shmdev_mbx_setup(void *pshmdrv_cb, struct shmdev_layer *pshm_dev,
- void *pshm_drv)
-{
- /*
- * For UX5500, we have only 1 SHM instance which uses MBX0
- * for communication with the peer modem
- */
- pshm_dev->hmbx = mbox_setup(MBX_ACC0, pshmdrv_cb, pshm_drv);
-
- if (!pshm_dev->hmbx)
- return -ENODEV;
- else
- return 0;
-}
-
-static int __init caif_shmdev_init(void)
-{
- int i, result;
-
- /* Loop is currently overkill, there is only one instance */
- for (i = 0; i < MAX_SHM_INSTANCES; i++) {
-
- shmdev_lyr[i].shm_base_addr = shm_start;
- shmdev_lyr[i].shm_total_sz = shm_size;
-
- if (((char *)shmdev_lyr[i].shm_base_addr == NULL)
- || (shmdev_lyr[i].shm_total_sz <= 0)) {
- pr_warn("ERROR,"
- "Shared memory Address and/or Size incorrect"
- ", Bailing out ...\n");
- result = -EINVAL;
- goto clean;
- }
-
- pr_info("SHM AREA (instance %d) STARTS"
- " AT %p\n", i, (char *)shmdev_lyr[i].shm_base_addr);
-
- shmdev_lyr[i].shm_id = i;
- shmdev_lyr[i].pshmdev_mbxsend = shmdev_send_msg;
- shmdev_lyr[i].pshmdev_mbxsetup = shmdev_mbx_setup;
-
- /*
- * Finally, CAIF core module is called with details in place:
- * 1. SHM base address
- * 2. SHM size
- * 3. MBX handle
- */
- result = caif_shmcore_probe(&shmdev_lyr[i]);
- if (result) {
- pr_warn("ERROR[%d],"
- "Could not probe SHM core (instance %d)"
- " Bailing out ...\n", result, i);
- goto clean;
- }
- }
-
- return 0;
-
-clean:
- /*
- * For now, we assume that even if one instance of SHM fails, we bail
- * out of the driver support completely. For this, we need to release
- * any memory allocated and unregister any instance of SHM net device.
- */
- for (i = 0; i < MAX_SHM_INSTANCES; i++) {
- if (shmdev_lyr[i].pshm_netdev)
- unregister_netdev(shmdev_lyr[i].pshm_netdev);
- }
- return result;
-}
-
-static void __exit caif_shmdev_exit(void)
-{
- int i;
-
- for (i = 0; i < MAX_SHM_INSTANCES; i++) {
- caif_shmcore_remove(shmdev_lyr[i].pshm_netdev);
- kfree((void *)shmdev_lyr[i].shm_base_addr);
- }
-
-}
-
-module_init(caif_shmdev_init);
-module_exit(caif_shmdev_exit);
+++ /dev/null
-/*
- * Copyright (C) ST-Ericsson AB 2010
- * Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
- * Authors: Amarnath Revanna / amarnath.bangalore.revanna@stericsson.com,
- * Daniel Martensson / daniel.martensson@stericsson.com
- * License terms: GNU General Public License (GPL) version 2
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ":" fmt
-
-#include <linux/spinlock.h>
-#include <linux/sched.h>
-#include <linux/list.h>
-#include <linux/netdevice.h>
-#include <linux/if_arp.h>
-#include <linux/io.h>
-
-#include <net/caif/caif_device.h>
-#include <net/caif/caif_shm.h>
-
-#define NR_TX_BUF 6
-#define NR_RX_BUF 6
-#define TX_BUF_SZ 0x2000
-#define RX_BUF_SZ 0x2000
-
-#define CAIF_NEEDED_HEADROOM 32
-
-#define CAIF_FLOW_ON 1
-#define CAIF_FLOW_OFF 0
-
-#define LOW_WATERMARK 3
-#define HIGH_WATERMARK 4
-
-/* Maximum number of CAIF buffers per shared memory buffer. */
-#define SHM_MAX_FRMS_PER_BUF 10
-
-/*
- * Size in bytes of the descriptor area
- * (With end of descriptor signalling)
- */
-#define SHM_CAIF_DESC_SIZE ((SHM_MAX_FRMS_PER_BUF + 1) * \
- sizeof(struct shm_pck_desc))
-
-/*
- * Offset to the first CAIF frame within a shared memory buffer.
- * Aligned on 32 bytes.
- */
-#define SHM_CAIF_FRM_OFS (SHM_CAIF_DESC_SIZE + (SHM_CAIF_DESC_SIZE % 32))
-
-/* Number of bytes for CAIF shared memory header. */
-#define SHM_HDR_LEN 1
-
-/* Number of padding bytes for the complete CAIF frame. */
-#define SHM_FRM_PAD_LEN 4
-
-#define CAIF_MAX_MTU 4096
-
-#define SHM_SET_FULL(x) (((x+1) & 0x0F) << 0)
-#define SHM_GET_FULL(x) (((x >> 0) & 0x0F) - 1)
-
-#define SHM_SET_EMPTY(x) (((x+1) & 0x0F) << 4)
-#define SHM_GET_EMPTY(x) (((x >> 4) & 0x0F) - 1)
-
-#define SHM_FULL_MASK (0x0F << 0)
-#define SHM_EMPTY_MASK (0x0F << 4)
-
-struct shm_pck_desc {
- /*
- * Offset from start of shared memory area to start of
- * shared memory CAIF frame.
- */
- u32 frm_ofs;
- u32 frm_len;
-};
-
-struct buf_list {
- unsigned char *desc_vptr;
- u32 phy_addr;
- u32 index;
- u32 len;
- u32 frames;
- u32 frm_ofs;
- struct list_head list;
-};
-
-struct shm_caif_frm {
- /* Number of bytes of padding before the CAIF frame. */
- u8 hdr_ofs;
-};
-
-struct shmdrv_layer {
- /* caif_dev_common must always be first in the structure*/
- struct caif_dev_common cfdev;
-
- u32 shm_tx_addr;
- u32 shm_rx_addr;
- u32 shm_base_addr;
- u32 tx_empty_available;
- spinlock_t lock;
-
- struct list_head tx_empty_list;
- struct list_head tx_pend_list;
- struct list_head tx_full_list;
- struct list_head rx_empty_list;
- struct list_head rx_pend_list;
- struct list_head rx_full_list;
-
- struct workqueue_struct *pshm_tx_workqueue;
- struct workqueue_struct *pshm_rx_workqueue;
-
- struct work_struct shm_tx_work;
- struct work_struct shm_rx_work;
-
- struct sk_buff_head sk_qhead;
- struct shmdev_layer *pshm_dev;
-};
-
-static int shm_netdev_open(struct net_device *shm_netdev)
-{
- netif_wake_queue(shm_netdev);
- return 0;
-}
-
-static int shm_netdev_close(struct net_device *shm_netdev)
-{
- netif_stop_queue(shm_netdev);
- return 0;
-}
-
-int caif_shmdrv_rx_cb(u32 mbx_msg, void *priv)
-{
- struct buf_list *pbuf;
- struct shmdrv_layer *pshm_drv;
- struct list_head *pos;
- u32 avail_emptybuff = 0;
- unsigned long flags = 0;
-
- pshm_drv = priv;
-
- /* Check for received buffers. */
- if (mbx_msg & SHM_FULL_MASK) {
- int idx;
-
- spin_lock_irqsave(&pshm_drv->lock, flags);
-
- /* Check whether we have any outstanding buffers. */
- if (list_empty(&pshm_drv->rx_empty_list)) {
-
- /* Release spin lock. */
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- /* We print even in IRQ context... */
- pr_warn("No empty Rx buffers to fill: "
- "mbx_msg:%x\n", mbx_msg);
-
- /* Bail out. */
- goto err_sync;
- }
-
- pbuf =
- list_entry(pshm_drv->rx_empty_list.next,
- struct buf_list, list);
- idx = pbuf->index;
-
- /* Check buffer synchronization. */
- if (idx != SHM_GET_FULL(mbx_msg)) {
-
- /* We print even in IRQ context... */
- pr_warn(
- "phyif_shm_mbx_msg_cb: RX full out of sync:"
- " idx:%d, msg:%x SHM_GET_FULL(mbx_msg):%x\n",
- idx, mbx_msg, SHM_GET_FULL(mbx_msg));
-
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- /* Bail out. */
- goto err_sync;
- }
-
- list_del_init(&pbuf->list);
- list_add_tail(&pbuf->list, &pshm_drv->rx_full_list);
-
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- /* Schedule RX work queue. */
- if (!work_pending(&pshm_drv->shm_rx_work))
- queue_work(pshm_drv->pshm_rx_workqueue,
- &pshm_drv->shm_rx_work);
- }
-
- /* Check for emptied buffers. */
- if (mbx_msg & SHM_EMPTY_MASK) {
- int idx;
-
- spin_lock_irqsave(&pshm_drv->lock, flags);
-
- /* Check whether we have any outstanding buffers. */
- if (list_empty(&pshm_drv->tx_full_list)) {
-
- /* We print even in IRQ context... */
- pr_warn("No TX to empty: msg:%x\n", mbx_msg);
-
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- /* Bail out. */
- goto err_sync;
- }
-
- pbuf =
- list_entry(pshm_drv->tx_full_list.next,
- struct buf_list, list);
- idx = pbuf->index;
-
- /* Check buffer synchronization. */
- if (idx != SHM_GET_EMPTY(mbx_msg)) {
-
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- /* We print even in IRQ context... */
- pr_warn("TX empty "
- "out of sync:idx:%d, msg:%x\n", idx, mbx_msg);
-
- /* Bail out. */
- goto err_sync;
- }
- list_del_init(&pbuf->list);
-
- /* Reset buffer parameters. */
- pbuf->frames = 0;
- pbuf->frm_ofs = SHM_CAIF_FRM_OFS;
-
- list_add_tail(&pbuf->list, &pshm_drv->tx_empty_list);
-
- /* Check the available no. of buffers in the empty list */
- list_for_each(pos, &pshm_drv->tx_empty_list)
- avail_emptybuff++;
-
- /* Check whether we have to wake up the transmitter. */
- if ((avail_emptybuff > HIGH_WATERMARK) &&
- (!pshm_drv->tx_empty_available)) {
- pshm_drv->tx_empty_available = 1;
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
- pshm_drv->cfdev.flowctrl
- (pshm_drv->pshm_dev->pshm_netdev,
- CAIF_FLOW_ON);
-
-
- /* Schedule the work queue. if required */
- if (!work_pending(&pshm_drv->shm_tx_work))
- queue_work(pshm_drv->pshm_tx_workqueue,
- &pshm_drv->shm_tx_work);
- } else
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
- }
-
- return 0;
-
-err_sync:
- return -EIO;
-}
-
-static void shm_rx_work_func(struct work_struct *rx_work)
-{
- struct shmdrv_layer *pshm_drv;
- struct buf_list *pbuf;
- unsigned long flags = 0;
- struct sk_buff *skb;
- char *p;
- int ret;
-
- pshm_drv = container_of(rx_work, struct shmdrv_layer, shm_rx_work);
-
- while (1) {
-
- struct shm_pck_desc *pck_desc;
-
- spin_lock_irqsave(&pshm_drv->lock, flags);
-
- /* Check for received buffers. */
- if (list_empty(&pshm_drv->rx_full_list)) {
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
- break;
- }
-
- pbuf =
- list_entry(pshm_drv->rx_full_list.next, struct buf_list,
- list);
- list_del_init(&pbuf->list);
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- /* Retrieve pointer to start of the packet descriptor area. */
- pck_desc = (struct shm_pck_desc *) pbuf->desc_vptr;
-
- /*
- * Check whether descriptor contains a CAIF shared memory
- * frame.
- */
- while (pck_desc->frm_ofs) {
- unsigned int frm_buf_ofs;
- unsigned int frm_pck_ofs;
- unsigned int frm_pck_len;
- /*
- * Check whether offset is within buffer limits
- * (lower).
- */
- if (pck_desc->frm_ofs <
- (pbuf->phy_addr - pshm_drv->shm_base_addr))
- break;
- /*
- * Check whether offset is within buffer limits
- * (higher).
- */
- if (pck_desc->frm_ofs >
- ((pbuf->phy_addr - pshm_drv->shm_base_addr) +
- pbuf->len))
- break;
-
- /* Calculate offset from start of buffer. */
- frm_buf_ofs =
- pck_desc->frm_ofs - (pbuf->phy_addr -
- pshm_drv->shm_base_addr);
-
- /*
- * Calculate offset and length of CAIF packet while
- * taking care of the shared memory header.
- */
- frm_pck_ofs =
- frm_buf_ofs + SHM_HDR_LEN +
- (*(pbuf->desc_vptr + frm_buf_ofs));
- frm_pck_len =
- (pck_desc->frm_len - SHM_HDR_LEN -
- (*(pbuf->desc_vptr + frm_buf_ofs)));
-
- /* Check whether CAIF packet is within buffer limits */
- if ((frm_pck_ofs + pck_desc->frm_len) > pbuf->len)
- break;
-
- /* Get a suitable CAIF packet and copy in data. */
- skb = netdev_alloc_skb(pshm_drv->pshm_dev->pshm_netdev,
- frm_pck_len + 1);
-
- if (skb == NULL) {
- pr_info("OOM: Try next frame in descriptor\n");
- break;
- }
-
- p = skb_put(skb, frm_pck_len);
- memcpy(p, pbuf->desc_vptr + frm_pck_ofs, frm_pck_len);
-
- skb->protocol = htons(ETH_P_CAIF);
- skb_reset_mac_header(skb);
- skb->dev = pshm_drv->pshm_dev->pshm_netdev;
-
- /* Push received packet up the stack. */
- ret = netif_rx_ni(skb);
-
- if (!ret) {
- pshm_drv->pshm_dev->pshm_netdev->stats.
- rx_packets++;
- pshm_drv->pshm_dev->pshm_netdev->stats.
- rx_bytes += pck_desc->frm_len;
- } else
- ++pshm_drv->pshm_dev->pshm_netdev->stats.
- rx_dropped;
- /* Move to next packet descriptor. */
- pck_desc++;
- }
-
- spin_lock_irqsave(&pshm_drv->lock, flags);
- list_add_tail(&pbuf->list, &pshm_drv->rx_pend_list);
-
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- }
-
- /* Schedule the work queue. if required */
- if (!work_pending(&pshm_drv->shm_tx_work))
- queue_work(pshm_drv->pshm_tx_workqueue, &pshm_drv->shm_tx_work);
-
-}
-
-static void shm_tx_work_func(struct work_struct *tx_work)
-{
- u32 mbox_msg;
- unsigned int frmlen, avail_emptybuff, append = 0;
- unsigned long flags = 0;
- struct buf_list *pbuf = NULL;
- struct shmdrv_layer *pshm_drv;
- struct shm_caif_frm *frm;
- struct sk_buff *skb;
- struct shm_pck_desc *pck_desc;
- struct list_head *pos;
-
- pshm_drv = container_of(tx_work, struct shmdrv_layer, shm_tx_work);
-
- do {
- /* Initialize mailbox message. */
- mbox_msg = 0x00;
- avail_emptybuff = 0;
-
- spin_lock_irqsave(&pshm_drv->lock, flags);
-
- /* Check for pending receive buffers. */
- if (!list_empty(&pshm_drv->rx_pend_list)) {
-
- pbuf = list_entry(pshm_drv->rx_pend_list.next,
- struct buf_list, list);
-
- list_del_init(&pbuf->list);
- list_add_tail(&pbuf->list, &pshm_drv->rx_empty_list);
- /*
- * Value index is never changed,
- * so read access should be safe.
- */
- mbox_msg |= SHM_SET_EMPTY(pbuf->index);
- }
-
- skb = skb_peek(&pshm_drv->sk_qhead);
-
- if (skb == NULL)
- goto send_msg;
- /* Check the available no. of buffers in the empty list */
- list_for_each(pos, &pshm_drv->tx_empty_list)
- avail_emptybuff++;
-
- if ((avail_emptybuff < LOW_WATERMARK) &&
- pshm_drv->tx_empty_available) {
- /* Update blocking condition. */
- pshm_drv->tx_empty_available = 0;
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
- pshm_drv->cfdev.flowctrl
- (pshm_drv->pshm_dev->pshm_netdev,
- CAIF_FLOW_OFF);
- spin_lock_irqsave(&pshm_drv->lock, flags);
- }
- /*
- * We simply return back to the caller if we do not have space
- * either in Tx pending list or Tx empty list. In this case,
- * we hold the received skb in the skb list, waiting to
- * be transmitted once Tx buffers become available
- */
- if (list_empty(&pshm_drv->tx_empty_list))
- goto send_msg;
-
- /* Get the first free Tx buffer. */
- pbuf = list_entry(pshm_drv->tx_empty_list.next,
- struct buf_list, list);
- do {
- if (append) {
- skb = skb_peek(&pshm_drv->sk_qhead);
- if (skb == NULL)
- break;
- }
-
- frm = (struct shm_caif_frm *)
- (pbuf->desc_vptr + pbuf->frm_ofs);
-
- frm->hdr_ofs = 0;
- frmlen = 0;
- frmlen += SHM_HDR_LEN + frm->hdr_ofs + skb->len;
-
- /* Add tail padding if needed. */
- if (frmlen % SHM_FRM_PAD_LEN)
- frmlen += SHM_FRM_PAD_LEN -
- (frmlen % SHM_FRM_PAD_LEN);
-
- /*
- * Verify that packet, header and additional padding
- * can fit within the buffer frame area.
- */
- if (frmlen >= (pbuf->len - pbuf->frm_ofs))
- break;
-
- if (!append) {
- list_del_init(&pbuf->list);
- append = 1;
- }
-
- skb = skb_dequeue(&pshm_drv->sk_qhead);
- if (skb == NULL)
- break;
- /* Copy in CAIF frame. */
- skb_copy_bits(skb, 0, pbuf->desc_vptr +
- pbuf->frm_ofs + SHM_HDR_LEN +
- frm->hdr_ofs, skb->len);
-
- pshm_drv->pshm_dev->pshm_netdev->stats.tx_packets++;
- pshm_drv->pshm_dev->pshm_netdev->stats.tx_bytes +=
- frmlen;
- dev_kfree_skb_irq(skb);
-
- /* Fill in the shared memory packet descriptor area. */
- pck_desc = (struct shm_pck_desc *) (pbuf->desc_vptr);
- /* Forward to current frame. */
- pck_desc += pbuf->frames;
- pck_desc->frm_ofs = (pbuf->phy_addr -
- pshm_drv->shm_base_addr) +
- pbuf->frm_ofs;
- pck_desc->frm_len = frmlen;
- /* Terminate packet descriptor area. */
- pck_desc++;
- pck_desc->frm_ofs = 0;
- /* Update buffer parameters. */
- pbuf->frames++;
- pbuf->frm_ofs += frmlen + (frmlen % 32);
-
- } while (pbuf->frames < SHM_MAX_FRMS_PER_BUF);
-
- /* Assign buffer as full. */
- list_add_tail(&pbuf->list, &pshm_drv->tx_full_list);
- append = 0;
- mbox_msg |= SHM_SET_FULL(pbuf->index);
-send_msg:
- spin_unlock_irqrestore(&pshm_drv->lock, flags);
-
- if (mbox_msg)
- pshm_drv->pshm_dev->pshmdev_mbxsend
- (pshm_drv->pshm_dev->shm_id, mbox_msg);
- } while (mbox_msg);
-}
-
-static int shm_netdev_tx(struct sk_buff *skb, struct net_device *shm_netdev)
-{
- struct shmdrv_layer *pshm_drv;
-
- pshm_drv = netdev_priv(shm_netdev);
-
- skb_queue_tail(&pshm_drv->sk_qhead, skb);
-
- /* Schedule Tx work queue. for deferred processing of skbs*/
- if (!work_pending(&pshm_drv->shm_tx_work))
- queue_work(pshm_drv->pshm_tx_workqueue, &pshm_drv->shm_tx_work);
-
- return 0;
-}
-
-static const struct net_device_ops netdev_ops = {
- .ndo_open = shm_netdev_open,
- .ndo_stop = shm_netdev_close,
- .ndo_start_xmit = shm_netdev_tx,
-};
-
-static void shm_netdev_setup(struct net_device *pshm_netdev)
-{
- struct shmdrv_layer *pshm_drv;
- pshm_netdev->netdev_ops = &netdev_ops;
-
- pshm_netdev->mtu = CAIF_MAX_MTU;
- pshm_netdev->type = ARPHRD_CAIF;
- pshm_netdev->hard_header_len = CAIF_NEEDED_HEADROOM;
- pshm_netdev->tx_queue_len = 0;
- pshm_netdev->destructor = free_netdev;
-
- pshm_drv = netdev_priv(pshm_netdev);
-
- /* Initialize structures in a clean state. */
- memset(pshm_drv, 0, sizeof(struct shmdrv_layer));
-
- pshm_drv->cfdev.link_select = CAIF_LINK_LOW_LATENCY;
-}
-
-int caif_shmcore_probe(struct shmdev_layer *pshm_dev)
-{
- int result, j;
- struct shmdrv_layer *pshm_drv = NULL;
-
- pshm_dev->pshm_netdev = alloc_netdev(sizeof(struct shmdrv_layer),
- "cfshm%d", shm_netdev_setup);
- if (!pshm_dev->pshm_netdev)
- return -ENOMEM;
-
- pshm_drv = netdev_priv(pshm_dev->pshm_netdev);
- pshm_drv->pshm_dev = pshm_dev;
-
- /*
- * Initialization starts with the verification of the
- * availability of MBX driver by calling its setup function.
- * MBX driver must be available by this time for proper
- * functioning of SHM driver.
- */
- if ((pshm_dev->pshmdev_mbxsetup
- (caif_shmdrv_rx_cb, pshm_dev, pshm_drv)) != 0) {
- pr_warn("Could not config. SHM Mailbox,"
- " Bailing out.....\n");
- free_netdev(pshm_dev->pshm_netdev);
- return -ENODEV;
- }
-
- skb_queue_head_init(&pshm_drv->sk_qhead);
-
- pr_info("SHM DEVICE[%d] PROBED BY DRIVER, NEW SHM DRIVER"
- " INSTANCE AT pshm_drv =0x%p\n",
- pshm_drv->pshm_dev->shm_id, pshm_drv);
-
- if (pshm_dev->shm_total_sz <
- (NR_TX_BUF * TX_BUF_SZ + NR_RX_BUF * RX_BUF_SZ)) {
-
- pr_warn("ERROR, Amount of available"
- " Phys. SHM cannot accommodate current SHM "
- "driver configuration, Bailing out ...\n");
- free_netdev(pshm_dev->pshm_netdev);
- return -ENOMEM;
- }
-
- pshm_drv->shm_base_addr = pshm_dev->shm_base_addr;
- pshm_drv->shm_tx_addr = pshm_drv->shm_base_addr;
-
- if (pshm_dev->shm_loopback)
- pshm_drv->shm_rx_addr = pshm_drv->shm_tx_addr;
- else
- pshm_drv->shm_rx_addr = pshm_dev->shm_base_addr +
- (NR_TX_BUF * TX_BUF_SZ);
-
- spin_lock_init(&pshm_drv->lock);
- INIT_LIST_HEAD(&pshm_drv->tx_empty_list);
- INIT_LIST_HEAD(&pshm_drv->tx_pend_list);
- INIT_LIST_HEAD(&pshm_drv->tx_full_list);
-
- INIT_LIST_HEAD(&pshm_drv->rx_empty_list);
- INIT_LIST_HEAD(&pshm_drv->rx_pend_list);
- INIT_LIST_HEAD(&pshm_drv->rx_full_list);
-
- INIT_WORK(&pshm_drv->shm_tx_work, shm_tx_work_func);
- INIT_WORK(&pshm_drv->shm_rx_work, shm_rx_work_func);
-
- pshm_drv->pshm_tx_workqueue =
- create_singlethread_workqueue("shm_tx_work");
- pshm_drv->pshm_rx_workqueue =
- create_singlethread_workqueue("shm_rx_work");
-
- for (j = 0; j < NR_TX_BUF; j++) {
- struct buf_list *tx_buf =
- kmalloc(sizeof(struct buf_list), GFP_KERNEL);
-
- if (tx_buf == NULL) {
- free_netdev(pshm_dev->pshm_netdev);
- return -ENOMEM;
- }
- tx_buf->index = j;
- tx_buf->phy_addr = pshm_drv->shm_tx_addr + (TX_BUF_SZ * j);
- tx_buf->len = TX_BUF_SZ;
- tx_buf->frames = 0;
- tx_buf->frm_ofs = SHM_CAIF_FRM_OFS;
-
- if (pshm_dev->shm_loopback)
- tx_buf->desc_vptr = (unsigned char *)tx_buf->phy_addr;
- else
- /*
- * FIXME: the result of ioremap is not a pointer - arnd
- */
- tx_buf->desc_vptr =
- ioremap(tx_buf->phy_addr, TX_BUF_SZ);
-
- list_add_tail(&tx_buf->list, &pshm_drv->tx_empty_list);
- }
-
- for (j = 0; j < NR_RX_BUF; j++) {
- struct buf_list *rx_buf =
- kmalloc(sizeof(struct buf_list), GFP_KERNEL);
-
- if (rx_buf == NULL) {
- free_netdev(pshm_dev->pshm_netdev);
- return -ENOMEM;
- }
- rx_buf->index = j;
- rx_buf->phy_addr = pshm_drv->shm_rx_addr + (RX_BUF_SZ * j);
- rx_buf->len = RX_BUF_SZ;
-
- if (pshm_dev->shm_loopback)
- rx_buf->desc_vptr = (unsigned char *)rx_buf->phy_addr;
- else
- rx_buf->desc_vptr =
- ioremap(rx_buf->phy_addr, RX_BUF_SZ);
- list_add_tail(&rx_buf->list, &pshm_drv->rx_empty_list);
- }
-
- pshm_drv->tx_empty_available = 1;
- result = register_netdev(pshm_dev->pshm_netdev);
- if (result)
- pr_warn("ERROR[%d], SHM could not, "
- "register with NW FRMWK Bailing out ...\n", result);
-
- return result;
-}
-
-void caif_shmcore_remove(struct net_device *pshm_netdev)
-{
- struct buf_list *pbuf;
- struct shmdrv_layer *pshm_drv = NULL;
-
- pshm_drv = netdev_priv(pshm_netdev);
-
- while (!(list_empty(&pshm_drv->tx_pend_list))) {
- pbuf =
- list_entry(pshm_drv->tx_pend_list.next,
- struct buf_list, list);
-
- list_del(&pbuf->list);
- kfree(pbuf);
- }
-
- while (!(list_empty(&pshm_drv->tx_full_list))) {
- pbuf =
- list_entry(pshm_drv->tx_full_list.next,
- struct buf_list, list);
- list_del(&pbuf->list);
- kfree(pbuf);
- }
-
- while (!(list_empty(&pshm_drv->tx_empty_list))) {
- pbuf =
- list_entry(pshm_drv->tx_empty_list.next,
- struct buf_list, list);
- list_del(&pbuf->list);
- kfree(pbuf);
- }
-
- while (!(list_empty(&pshm_drv->rx_full_list))) {
- pbuf =
- list_entry(pshm_drv->tx_full_list.next,
- struct buf_list, list);
- list_del(&pbuf->list);
- kfree(pbuf);
- }
-
- while (!(list_empty(&pshm_drv->rx_pend_list))) {
- pbuf =
- list_entry(pshm_drv->tx_pend_list.next,
- struct buf_list, list);
- list_del(&pbuf->list);
- kfree(pbuf);
- }
-
- while (!(list_empty(&pshm_drv->rx_empty_list))) {
- pbuf =
- list_entry(pshm_drv->rx_empty_list.next,
- struct buf_list, list);
- list_del(&pbuf->list);
- kfree(pbuf);
- }
-
- /* Destroy work queues. */
- destroy_workqueue(pshm_drv->pshm_tx_workqueue);
- destroy_workqueue(pshm_drv->pshm_rx_workqueue);
-
- unregister_netdev(pshm_netdev);
-}
config CAN_AT91
tristate "Atmel AT91 onchip CAN controller"
- depends on ARCH_AT91SAM9263 || ARCH_AT91SAM9X5
+ depends on ARM
---help---
This is a driver for the SoC CAN controller in Atmel's AT91SAM9263
and AT91SAM9X5 processors.
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netdevice.h>
+#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/rtnetlink.h>
#include <linux/skbuff.h>
canid_t mb0_id;
};
-static const struct at91_devtype_data at91_devtype_data[] = {
- [AT91_DEVTYPE_SAM9263] = {
- .rx_first = 1,
- .rx_split = 8,
- .rx_last = 11,
- .tx_shift = 2,
- },
- [AT91_DEVTYPE_SAM9X5] = {
- .rx_first = 0,
- .rx_split = 4,
- .rx_last = 5,
- .tx_shift = 1,
- },
+static const struct at91_devtype_data at91_at91sam9263_data = {
+ .rx_first = 1,
+ .rx_split = 8,
+ .rx_last = 11,
+ .tx_shift = 2,
+ .type = AT91_DEVTYPE_SAM9263,
+};
+
+static const struct at91_devtype_data at91_at91sam9x5_data = {
+ .rx_first = 0,
+ .rx_split = 4,
+ .rx_last = 5,
+ .tx_shift = 1,
+ .type = AT91_DEVTYPE_SAM9X5,
};
static const struct can_bittiming_const at91_bittiming_const = {
.attrs = at91_sysfs_attrs,
};
+#if defined(CONFIG_OF)
+static const struct of_device_id at91_can_dt_ids[] = {
+ {
+ .compatible = "atmel,at91sam9x5-can",
+ .data = &at91_at91sam9x5_data,
+ }, {
+ .compatible = "atmel,at91sam9263-can",
+ .data = &at91_at91sam9263_data,
+ }, {
+ /* sentinel */
+ }
+};
+MODULE_DEVICE_TABLE(of, at91_can_dt_ids);
+#else
+#define at91_can_dt_ids NULL
+#endif
+
+static const struct at91_devtype_data *at91_can_get_driver_data(struct platform_device *pdev)
+{
+ if (pdev->dev.of_node) {
+ const struct of_device_id *match;
+
+ match = of_match_node(at91_can_dt_ids, pdev->dev.of_node);
+ if (!match) {
+ dev_err(&pdev->dev, "no matching node found in dtb\n");
+ return NULL;
+ }
+ return (const struct at91_devtype_data *)match->data;
+ }
+ return (const struct at91_devtype_data *)
+ platform_get_device_id(pdev)->driver_data;
+}
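
A note on lookup order in the helper above, as descriptive comments (summary, not patch
text):

	/*
	 * Lookup order:
	 *   1. DT probe:      of_match_node() -> of_device_id.data
	 *   2. legacy probe:  platform_get_device_id() -> driver_data
	 * Both paths must hand back one of the two at91_at91sam9*_data
	 * objects defined earlier, which is why the id table below now
	 * stores pointers instead of enum values.
	 */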
+
static int at91_can_probe(struct platform_device *pdev)
{
const struct at91_devtype_data *devtype_data;
- enum at91_devtype devtype;
struct net_device *dev;
struct at91_priv *priv;
struct resource *res;
void __iomem *addr;
int err, irq;
- devtype = pdev->id_entry->driver_data;
- devtype_data = &at91_devtype_data[devtype];
+ devtype_data = at91_can_get_driver_data(pdev);
+ if (!devtype_data) {
+ dev_err(&pdev->dev, "no driver data\n");
+ err = -ENODEV;
+ goto exit;
+ }
clk = clk_get(&pdev->dev, "can_clk");
if (IS_ERR(clk)) {
priv->dev = dev;
priv->reg_base = addr;
priv->devtype_data = *devtype_data;
- priv->devtype_data.type = devtype;
priv->clk = clk;
priv->pdata = pdev->dev.platform_data;
priv->mb0_id = 0x7ff;
static const struct platform_device_id at91_can_id_table[] = {
{
.name = "at91_can",
- .driver_data = AT91_DEVTYPE_SAM9263,
+		.driver_data = (kernel_ulong_t)&at91_at91sam9263_data,
}, {
.name = "at91sam9x5_can",
- .driver_data = AT91_DEVTYPE_SAM9X5,
+		.driver_data = (kernel_ulong_t)&at91_at91sam9x5_data,
}, {
/* sentinel */
}
.driver = {
.name = KBUILD_MODNAME,
.owner = THIS_MODULE,
+ .of_match_table = at91_can_dt_ids,
},
.id_table = at91_can_id_table,
};
return 0;
}
-irqreturn_t bfin_can_interrupt(int irq, void *dev_id)
+static irqreturn_t bfin_can_interrupt(int irq, void *dev_id)
{
struct net_device *dev = dev_id;
struct bfin_can_priv *priv = netdev_priv(dev);
return 0;
}
-struct net_device *alloc_bfin_candev(void)
+static struct net_device *alloc_bfin_candev(void)
{
struct net_device *dev;
struct bfin_can_priv *priv;
return 0;
}
-#ifdef CONFIG_PM
-static int mcp251x_can_suspend(struct spi_device *spi, pm_message_t state)
+#ifdef CONFIG_PM_SLEEP
+
+static int mcp251x_can_suspend(struct device *dev)
{
+ struct spi_device *spi = to_spi_device(dev);
struct mcp251x_platform_data *pdata = spi->dev.platform_data;
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
struct net_device *net = priv->net;
return 0;
}
-static int mcp251x_can_resume(struct spi_device *spi)
+static int mcp251x_can_resume(struct device *dev)
{
+ struct spi_device *spi = to_spi_device(dev);
struct mcp251x_platform_data *pdata = spi->dev.platform_data;
struct mcp251x_priv *priv = dev_get_drvdata(&spi->dev);
enable_irq(spi->irq);
return 0;
}
+
+static SIMPLE_DEV_PM_OPS(mcp251x_can_pm_ops, mcp251x_can_suspend,
+ mcp251x_can_resume);
+#define MCP251X_PM_OPS (&mcp251x_can_pm_ops)
+
#else
-#define mcp251x_can_suspend NULL
-#define mcp251x_can_resume NULL
+#define MCP251X_PM_OPS NULL
#endif
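
For reference, SIMPLE_DEV_PM_OPS() wraps the two callbacks into a struct dev_pm_ops that
covers every system-sleep transition. Roughly what it expands to here, as a sketch (the
authoritative definition lives in include/linux/pm.h):

	static const struct dev_pm_ops mcp251x_can_pm_ops = {
		.suspend	= mcp251x_can_suspend,
		.resume		= mcp251x_can_resume,
		.freeze		= mcp251x_can_suspend,
		.thaw		= mcp251x_can_resume,
		.poweroff	= mcp251x_can_suspend,
		.restore	= mcp251x_can_resume,
	};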
static const struct spi_device_id mcp251x_id_table[] = {
static struct spi_driver mcp251x_can_driver = {
.driver = {
.name = DEVICE_NAME,
- .bus = &spi_bus_type,
.owner = THIS_MODULE,
+ .pm = MCP251X_PM_OPS,
},
.id_table = mcp251x_id_table,
.probe = mcp251x_can_probe,
.remove = mcp251x_can_remove,
- .suspend = mcp251x_can_suspend,
- .resume = mcp251x_can_resume,
};
-
-static int __init mcp251x_can_init(void)
-{
- return spi_register_driver(&mcp251x_can_driver);
-}
-
-static void __exit mcp251x_can_exit(void)
-{
- spi_unregister_driver(&mcp251x_can_driver);
-}
-
-module_init(mcp251x_can_init);
-module_exit(mcp251x_can_exit);
+module_spi_driver(mcp251x_can_driver);
MODULE_AUTHOR("Chris Elston <celston@katalix.com>, "
"Christian Pellegrin <chripell@evolware.org>");
/* allocate a new skb for next time receive */
new_skb = netdev_alloc_skb(dev, PKT_BUF_SZ + NET_IP_ALIGN);
- if (!new_skb) {
- pr_notice("init: low on mem - packet dropped\n");
+ if (!new_skb)
goto init_error;
- }
+
skb_reserve(new_skb, NET_IP_ALIGN);
	/* Invalidate the data cache of the skb->data range when it is a
	 * write-back cache. It will prevent overwriting the new data from DMA
new_skb = netdev_alloc_skb(dev, PKT_BUF_SZ + NET_IP_ALIGN);
if (!new_skb) {
- netdev_notice(dev, "rx: low on mem - packet dropped\n");
dev->stats.rx_dropped++;
goto out;
}
}
/* Allocate TX descriptor ring in coherent memory */
- greth->tx_bd_base = (struct greth_bd *) dma_alloc_coherent(greth->dev,
- 1024,
- &greth->tx_bd_base_phys,
- GFP_KERNEL);
-
+ greth->tx_bd_base = dma_alloc_coherent(greth->dev, 1024,
+ &greth->tx_bd_base_phys,
+ GFP_KERNEL | __GFP_ZERO);
if (!greth->tx_bd_base) {
- if (netif_msg_probe(greth))
- dev_err(&dev->dev, "could not allocate descriptor memory.\n");
err = -ENOMEM;
goto error3;
}
- memset(greth->tx_bd_base, 0, 1024);
-
/* Allocate RX descriptor ring in coherent memory */
- greth->rx_bd_base = (struct greth_bd *) dma_alloc_coherent(greth->dev,
- 1024,
- &greth->rx_bd_base_phys,
- GFP_KERNEL);
-
+ greth->rx_bd_base = dma_alloc_coherent(greth->dev, 1024,
+ &greth->rx_bd_base_phys,
+ GFP_KERNEL | __GFP_ZERO);
if (!greth->rx_bd_base) {
- if (netif_msg_probe(greth))
- dev_err(greth->dev, "could not allocate descriptor memory.\n");
err = -ENOMEM;
goto error4;
}
- memset(greth->rx_bd_base, 0, 1024);
-
/* Get MAC address from: module param, OF property or ID prom */
for (i = 0; i < 6; i++) {
if (macaddr[i] != 0)
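
The greth hunks above fold the zeroing into the allocation itself: passing __GFP_ZERO
makes dma_alloc_coherent() return memory that is already cleared, so the follow-up
memset() disappears, as does the redundant cast of the void * return value. The resulting
pattern, sketched with hypothetical names (ring, ring_bytes, ring_dma):

	/* descriptor ring comes back zeroed: no memset(), no cast */
	ring = dma_alloc_coherent(dev, ring_bytes, &ring_dma,
				  GFP_KERNEL | __GFP_ZERO);
	if (!ring)
		return -ENOMEM;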