Merge tag 'isci-for-3.5' into misc
James Bottomley [Mon, 21 May 2012 11:17:30 +0000 (12:17 +0100)]
isci update for 3.5

1/ Rework remote-node-context (RNC) handling for proper management of
   the silicon state machine in error handling and hot-plug conditions.
   Further details are below; suffice it to say that if the RNC is
   mismanaged, the silicon state machines may lock up.

2/ Refactor the initialization code to be reused for suspend/resume support

3/ Miscellaneous bug fixes to address discovery issues and hardware
   compatibility.

RNC rework details from Jeff Skirvin:

In the controller, devices as they appear on a SAS domain (or
direct-attached SATA devices) are represented by memory structures known
as "Remote Node Contexts" (RNCs).  These structures are transferred from
main memory to the controller using a set of register commands; these
commands include setting up the context ("posting"), removing the
context ("invalidating"), and commands to control the scheduling of
commands and connections to that remote device ("suspensions" and
"resumptions").  There is a similar path to control RNC scheduling from
the protocol engine, which interprets the results of command and data
transmission and reception.
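
As a purely illustrative aside, the register operations described above
fall into roughly the following groups; the names below are hypothetical
and are not the driver's actual symbols:

    /* Illustrative only -- hypothetical names, not the isci driver's API. */
    enum rnc_register_op {
            RNC_OP_POST,        /* "posting": hand the context to the controller */
            RNC_OP_INVALIDATE,  /* "invalidating": remove the context            */
            RNC_OP_SUSPEND,     /* stop scheduling work to the remote device     */
            RNC_OP_RESUME,      /* make the context schedulable again            */
    };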

In general, the controller chooses among non-suspended RNCs to find one
that has work requiring scheduling the transmission of command and data
frames to a target.  Likewise, when a target tries to return data back
to the initiator, the state of the RNC is used by the controller to
determine how to treat the incoming request. As an example, if the RNC
is in the state "TX/RX Suspended", incoming SSP connection requests from
the target will be rejected by the controller hardware.  When an RNC is
"TX Suspended", it will not be selected by the controller hardware to
start outgoing command or data operations (with certain priority-based
exceptions).
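
The scheduling rules above reduce to a small model; the sketch below is
for illustration only, with hypothetical names rather than anything
taken from the driver:

    /* Illustrative model of how the controller treats an RNC per state. */
    enum rnc_state { RNC_READY, RNC_TX_SUSPENDED, RNC_TX_RX_SUSPENDED };

    /* May the controller select this RNC to start outgoing command or
     * data operations?  (Ignores the priority-based exceptions.) */
    static int rnc_tx_schedulable(enum rnc_state s)
    {
            return s == RNC_READY;
    }

    /* Are incoming SSP connection requests from the target accepted? */
    static int rnc_rx_accepted(enum rnc_state s)
    {
            return s != RNC_TX_RX_SUSPENDED;
    }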

As mentioned above, there are two sources for management of the RNC
states: commands from driver software, and the result of transmission
and reception conditions of commands and data signaled by the controller
hardware.  As an example of the latter, if an outgoing SSP command ends
with an OPEN_REJECT(BAD_DESTINATION) status, the RNC state will
transition to the "TX Suspended" state; this is signaled by the
controller hardware both in the completion status of the pending
command and in a controller hardware event.  Examples of
the former are included in the patch changelogs.
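
As a sketch of that hardware-driven transition (reusing the
hypothetical enum rnc_state from the model above; this is not actual
driver code):

    /* Illustrative only: an OPEN_REJECT(BAD_DESTINATION) completion
     * leaves the RNC in "TX Suspended"; any other completion leaves the
     * state unchanged in this simplified model. */
    static enum rnc_state rnc_state_after_completion(enum rnc_state cur,
                                                     int open_reject_bad_dest)
    {
            return open_reject_bad_dest ? RNC_TX_SUSPENDED : cur;
    }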

Driver software is required to place the RNC in the "TX/RX Suspended"
condition before any outstanding commands can be terminated.  Failure to
guarantee this can lead to a complete hardware hang condition.  Earlier
versions of the driver software did not guarantee that an RNC was
correctly managed before I/O termination, and so operated in an unsafe
way.

Further, the driver performed unnecessary contortions to preserve the
remote device command state and so was more complicated than it needed
to be.  The simplifying assumption is that once an I/O has entered the
error handler path without having completed in the target, all driver
use of the sas_task must end.
Beyond that, recovery of operation is dependent on libsas and other
components to reset, rediscover and reconfigure the device before normal
operation can restart.  In the driver, this simplifying assumption meant
that the RNC management could be reduced to entering the suspended
state, terminating the targeted I/O request, and resuming the RNC as
needed for device-specific management such as an SSP Abort Task or LUN
Reset task management request.
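
Put together, the simplified termination path amounts to the sequence
below.  This is a minimal sketch with hypothetical helper names, not
the driver's actual functions; the ordering is the important part:

    struct rnc;            /* opaque remote node context */
    struct io_request;     /* opaque outstanding request */

    /* Stubs standing in for the real register-command paths. */
    static void rnc_suspend_tx_rx(struct rnc *rnc)  { (void)rnc; }
    static void io_terminate(struct io_request *io) { (void)io; }
    static void rnc_resume(struct rnc *rnc)         { (void)rnc; }

    static void abort_io_sketch(struct rnc *rnc, struct io_request *io)
    {
            rnc_suspend_tx_rx(rnc); /* suspend before any termination      */
            io_terminate(io);       /* all driver use of the sas_task ends */
            rnc_resume(rnc);        /* only as needed, e.g. for Abort Task
                                     * or LUN Reset management             */
    }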

76 files changed:
Documentation/scsi/ChangeLog.megaraid_sas
MAINTAINERS
drivers/scsi/Kconfig
drivers/scsi/aacraid/src.c
drivers/scsi/be2iscsi/be.h
drivers/scsi/be2iscsi/be_cmds.c
drivers/scsi/be2iscsi/be_cmds.h
drivers/scsi/be2iscsi/be_iscsi.c
drivers/scsi/be2iscsi/be_iscsi.h
drivers/scsi/be2iscsi/be_main.c
drivers/scsi/be2iscsi/be_main.h
drivers/scsi/be2iscsi/be_mgmt.c
drivers/scsi/be2iscsi/be_mgmt.h
drivers/scsi/bfa/bfa_fcs.h
drivers/scsi/bfa/bfa_fcs_lport.c
drivers/scsi/bfa/bfad.c
drivers/scsi/bfa/bfad_attr.c
drivers/scsi/bnx2i/57xx_iscsi_constants.h
drivers/scsi/bnx2i/57xx_iscsi_hsi.h
drivers/scsi/bnx2i/bnx2i.h
drivers/scsi/bnx2i/bnx2i_hwi.c
drivers/scsi/bnx2i/bnx2i_init.c
drivers/scsi/bnx2i/bnx2i_iscsi.c
drivers/scsi/bnx2i/bnx2i_sysfs.c
drivers/scsi/device_handler/scsi_dh_alua.c
drivers/scsi/fcoe/fcoe.c
drivers/scsi/fcoe/fcoe.h
drivers/scsi/fcoe/fcoe_ctlr.c
drivers/scsi/hpsa.c
drivers/scsi/hpsa.h
drivers/scsi/hpsa_cmd.h
drivers/scsi/libfc/fc_lport.c
drivers/scsi/lpfc/lpfc.h
drivers/scsi/lpfc/lpfc_bsg.c
drivers/scsi/lpfc/lpfc_bsg.h
drivers/scsi/lpfc/lpfc_crtn.h
drivers/scsi/lpfc/lpfc_debugfs.c
drivers/scsi/lpfc/lpfc_debugfs.h
drivers/scsi/lpfc/lpfc_els.c
drivers/scsi/lpfc/lpfc_hbadisc.c
drivers/scsi/lpfc/lpfc_hw.h
drivers/scsi/lpfc/lpfc_hw4.h
drivers/scsi/lpfc/lpfc_init.c
drivers/scsi/lpfc/lpfc_nportdisc.c
drivers/scsi/lpfc/lpfc_scsi.c
drivers/scsi/lpfc/lpfc_sli.c
drivers/scsi/lpfc/lpfc_sli.h
drivers/scsi/lpfc/lpfc_sli4.h
drivers/scsi/lpfc/lpfc_version.h
drivers/scsi/megaraid/megaraid_sas.h
drivers/scsi/megaraid/megaraid_sas_base.c
drivers/scsi/megaraid/megaraid_sas_fp.c
drivers/scsi/megaraid/megaraid_sas_fusion.c
drivers/scsi/mpt2sas/mpi/mpi2.h
drivers/scsi/mpt2sas/mpi/mpi2_cnfg.h
drivers/scsi/mpt2sas/mpt2sas_base.c
drivers/scsi/mpt2sas/mpt2sas_base.h
drivers/scsi/mpt2sas/mpt2sas_ctl.c
drivers/scsi/mpt2sas/mpt2sas_scsih.c
drivers/scsi/mpt2sas/mpt2sas_transport.c
drivers/scsi/pm8001/pm8001_defs.h
drivers/scsi/pm8001/pm8001_hwi.c
drivers/scsi/pm8001/pm8001_hwi.h
drivers/scsi/pm8001/pm8001_init.c
drivers/scsi/scsi.c
drivers/scsi/scsi_lib.c
drivers/scsi/scsi_pm.c
drivers/scsi/scsi_priv.h
drivers/scsi/scsi_transport_fc.c
drivers/scsi/scsi_transport_spi.c
drivers/scsi/sd.c
drivers/scsi/sg.c
drivers/scsi/st.h
drivers/scsi/storvsc_drv.c
drivers/scsi/ufs/ufshcd.c
include/scsi/iscsi_proto.h

index 83f8ea8..80441ab 100644 (file)
@@ -1,3 +1,11 @@
+Release Date    : Mon. Mar 19, 2012 17:00:00 PST 2012 -
+                       (emaild-id:megaraidlinux@lsi.com)
+                       Adam Radford
+Current Version : 00.00.06.15-rc1
+Old Version     : 00.00.06.14-rc1
+    1. Optimize HostMSIxVectors setting.
+    2. Add fpRead/WriteCapable, fpRead/WriteAcrossStripe checks.
+-------------------------------------------------------------------------------
 Release Date    : Fri. Jan 6, 2012 17:00:00 PST 2010 -
                        (emaild-id:megaraidlinux@lsi.com)
                        Adam Radford
index b362709..8795204 100644 (file)
@@ -1598,6 +1598,7 @@ F:        include/linux/bcma/
 
 BROCADE BFA FC SCSI DRIVER
 M:     Jing Huang <huangj@brocade.com>
+M:     Krishna C Gudipati <kgudipat@brocade.com>
 L:     linux-scsi@vger.kernel.org
 S:     Supported
 F:     drivers/scsi/bfa/
@@ -6883,6 +6884,14 @@ F:       Documentation/cdrom/
 F:     drivers/cdrom/cdrom.c
 F:     include/linux/cdrom.h
 
+UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER
+M:     Vinayak Holikatti <vinholikatti@gmail.com>
+M:     Santosh Y <santoshsy@gmail.com>
+L:     linux-scsi@vger.kernel.org
+S:     Supported
+F:     Documentation/scsi/ufs.txt
+F:     drivers/scsi/ufs/
+
 UNSORTED BLOCK IMAGES (UBI)
 M:     Artem Bityutskiy <dedekind1@gmail.com>
 W:     http://www.linux-mtd.infradead.org/
index 29684c8..bea04e5 100644 (file)
@@ -408,6 +408,7 @@ config BLK_DEV_3W_XXXX_RAID
 config SCSI_HPSA
        tristate "HP Smart Array SCSI driver"
        depends on PCI && SCSI
+       select CHECK_SIGNATURE
        help
          This driver supports HP Smart Array Controllers (circa 2009).
          It is a SCSI alternative to the cciss driver, which is a block
index 2bee515..7628206 100644 (file)
@@ -424,6 +424,8 @@ static int aac_src_deliver_message(struct fib *fib)
 static int aac_src_ioremap(struct aac_dev *dev, u32 size)
 {
        if (!size) {
+               iounmap(dev->regs.src.bar1);
+               dev->regs.src.bar1 = NULL;
                iounmap(dev->regs.src.bar0);
                dev->base = dev->regs.src.bar0 = NULL;
                return 0;
index 1d7b976..a50b6a9 100644 (file)
@@ -132,10 +132,6 @@ struct be_ctrl_info {
                ((u32)((((size_t)(_address) & (PAGE_SIZE_4K - 1)) +     \
                        (size) + (PAGE_SIZE_4K - 1)) >> PAGE_SHIFT_4K))
 
-/* Byte offset into the page corresponding to given address */
-#define OFFSET_IN_PAGE(addr)                                           \
-               ((size_t)(addr) & (PAGE_SIZE_4K-1))
-
 /* Returns bit offset within a DWORD of a bitfield */
 #define AMAP_BIT_OFFSET(_struct, field)                                        \
                (((size_t)&(((_struct *)0)->field))%32)
index cdb1536..d2e9e93 100644 (file)
@@ -15,6 +15,8 @@
  * Costa Mesa, CA 92626
  */
 
+#include <scsi/iscsi_proto.h>
+
 #include "be.h"
 #include "be_mgmt.h"
 #include "be_main.h"
index 8b40a5b..b0b36c6 100644 (file)
@@ -23,7 +23,7 @@
  * firmware in the BE. These requests are communicated to the processor
  * using Work Request Blocks (WRBs) submitted to the MCC-WRB ring or via one
  * WRB inside a MAILBOX.
- * The commands are serviced by the ARM processor in the BladeEngine's MPU.
+ * The commands are serviced by the ARM processor in the OneConnect's MPU.
  */
 struct be_sge {
        u32 pa_lo;
@@ -163,7 +163,8 @@ struct be_mcc_mailbox {
 #define OPCODE_COMMON_ISCSI_CFG_REMOVE_SGL_PAGES        3
 #define OPCODE_COMMON_ISCSI_NTWK_GET_NIC_CONFIG                7
 #define OPCODE_COMMON_ISCSI_NTWK_SET_VLAN              14
-#define OPCODE_COMMON_ISCSI_NTWK_CONFIGURE_STATELESS_IP_ADDR   17
+#define OPCODE_COMMON_ISCSI_NTWK_CONFIG_STATELESS_IP_ADDR      17
+#define OPCODE_COMMON_ISCSI_NTWK_REL_STATELESS_IP_ADDR 18
 #define OPCODE_COMMON_ISCSI_NTWK_MODIFY_IP_ADDR                21
 #define OPCODE_COMMON_ISCSI_NTWK_GET_DEFAULT_GATEWAY   22
 #define OPCODE_COMMON_ISCSI_NTWK_MODIFY_DEFAULT_GATEWAY 23
@@ -274,15 +275,15 @@ struct mgmt_conn_login_options {
        struct  mgmt_auth_method_format auth_data;
 } __packed;
 
-struct ip_address_format {
+struct ip_addr_format {
        u16 size_of_structure;
        u8 reserved;
        u8 ip_type;
-       u8 ip_address[16];
+       u8 addr[16];
        u32 rsvd0;
 } __packed;
 
-struct mgmt_conn_info {
+struct mgmt_conn_info {
        u32     connection_handle;
        u32     connection_status;
        u16     src_port;
@@ -290,9 +291,9 @@ struct      mgmt_conn_info {
        u16     dest_port_redirected;
        u16     cid;
        u32     estimated_throughput;
-       struct  ip_address_format       src_ipaddr;
-       struct  ip_address_format       dest_ipaddr;
-       struct  ip_address_format       dest_ipaddr_redirected;
+       struct  ip_addr_format  src_ipaddr;
+       struct  ip_addr_format  dest_ipaddr;
+       struct  ip_addr_format  dest_ipaddr_redirected;
        struct  mgmt_conn_login_options negotiated_login_options;
 } __packed;
 
@@ -322,43 +323,115 @@ struct mgmt_session_info {
        struct  mgmt_conn_info  conn_list[1];
 } __packed;
 
-struct  be_cmd_req_get_session {
+struct be_cmd_get_session_req {
        struct be_cmd_req_hdr hdr;
        u32 session_handle;
 } __packed;
 
-struct  be_cmd_resp_get_session {
+struct be_cmd_get_session_resp {
        struct be_cmd_resp_hdr hdr;
        struct mgmt_session_info session_info;
 } __packed;
 
 struct mac_addr {
-       u16 size_of_struct;
+       u16 size_of_structure;
        u8 addr[ETH_ALEN];
 } __packed;
 
-struct be_cmd_req_get_boot_target {
+struct be_cmd_get_boot_target_req {
        struct be_cmd_req_hdr hdr;
 } __packed;
 
-struct be_cmd_resp_get_boot_target {
+struct be_cmd_get_boot_target_resp {
        struct be_cmd_resp_hdr hdr;
        u32  boot_session_count;
        int  boot_session_handle;
 };
 
-struct be_cmd_req_mac_query {
+struct be_cmd_mac_query_req {
        struct be_cmd_req_hdr hdr;
        u8 type;
        u8 permanent;
        u16 if_id;
 } __packed;
 
-struct be_cmd_resp_mac_query {
+struct be_cmd_get_mac_resp {
        struct be_cmd_resp_hdr hdr;
        struct mac_addr mac;
 };
 
+struct be_ip_addr_subnet_format {
+       u16 size_of_structure;
+       u8 ip_type;
+       u8 ipv6_prefix_length;
+       u8 addr[16];
+       u8 subnet_mask[16];
+       u32 rsvd0;
+} __packed;
+
+struct be_cmd_get_if_info_req {
+       struct be_cmd_req_hdr hdr;
+       u32 interface_hndl;
+       u32 ip_type;
+} __packed;
+
+struct be_cmd_get_if_info_resp {
+       struct be_cmd_req_hdr hdr;
+       u32 interface_hndl;
+       u32 vlan_priority;
+       u32 ip_addr_count;
+       u32 dhcp_state;
+       struct be_ip_addr_subnet_format ip_addr;
+} __packed;
+
+struct be_ip_addr_record {
+       u32 action;
+       u32 interface_hndl;
+       struct be_ip_addr_subnet_format ip_addr;
+       u32 status;
+} __packed;
+
+struct be_ip_addr_record_params {
+       u32 record_entry_count;
+       struct be_ip_addr_record ip_record;
+} __packed;
+
+struct be_cmd_set_ip_addr_req {
+       struct be_cmd_req_hdr hdr;
+       struct be_ip_addr_record_params ip_params;
+} __packed;
+
+
+struct be_cmd_set_dhcp_req {
+       struct be_cmd_req_hdr hdr;
+       u32 interface_hndl;
+       u32 ip_type;
+       u32 flags;
+       u32 retry_count;
+} __packed;
+
+struct be_cmd_rel_dhcp_req {
+       struct be_cmd_req_hdr hdr;
+       u32 interface_hndl;
+       u32 ip_type;
+} __packed;
+
+struct be_cmd_set_def_gateway_req {
+       struct be_cmd_req_hdr hdr;
+       u32 action;
+       struct ip_addr_format ip_addr;
+} __packed;
+
+struct be_cmd_get_def_gateway_req {
+       struct be_cmd_req_hdr hdr;
+       u32 ip_type;
+} __packed;
+
+struct be_cmd_get_def_gateway_resp {
+       struct be_cmd_req_hdr hdr;
+       struct ip_addr_format ip_addr;
+} __packed;
+
 /******************** Create CQ ***************************/
 /**
  * Pseudo amap definition in which each bit of the actual structure is defined
@@ -489,7 +562,7 @@ struct be_cmd_req_modify_eq_delay {
 
 #define ETH_ALEN       6
 
-struct be_cmd_req_get_mac_addr {
+struct be_cmd_get_nic_conf_req {
        struct be_cmd_req_hdr hdr;
        u32 nic_port_count;
        u32 speed;
@@ -501,7 +574,7 @@ struct be_cmd_req_get_mac_addr {
        u32 rsvd[23];
 };
 
-struct be_cmd_resp_get_mac_addr {
+struct be_cmd_get_nic_conf_resp {
        struct be_cmd_resp_hdr hdr;
        u32 nic_port_count;
        u32 speed;
@@ -513,6 +586,39 @@ struct be_cmd_resp_get_mac_addr {
        u32 rsvd[23];
 };
 
+#define BEISCSI_ALIAS_LEN 32
+
+struct be_cmd_hba_name {
+       struct be_cmd_req_hdr hdr;
+       u16 flags;
+       u16 rsvd0;
+       u8 initiator_name[ISCSI_NAME_LEN];
+       u8 initiator_alias[BEISCSI_ALIAS_LEN];
+} __packed;
+
+struct be_cmd_ntwk_link_status_req {
+       struct be_cmd_req_hdr hdr;
+       u32 rsvd0;
+} __packed;
+
+/*** Port Speed Values ***/
+#define BE2ISCSI_LINK_SPEED_ZERO       0x00
+#define BE2ISCSI_LINK_SPEED_10MBPS     0x01
+#define BE2ISCSI_LINK_SPEED_100MBPS    0x02
+#define BE2ISCSI_LINK_SPEED_1GBPS      0x03
+#define BE2ISCSI_LINK_SPEED_10GBPS     0x04
+struct be_cmd_ntwk_link_status_resp {
+       struct be_cmd_resp_hdr hdr;
+       u8 phys_port;
+       u8 mac_duplex;
+       u8 mac_speed;
+       u8 mac_fault;
+       u8 mgmt_mac_duplex;
+       u8 mgmt_mac_speed;
+       u16 qos_link_speed;
+       u32 logical_link_speed;
+} __packed;
+
 int beiscsi_cmd_eq_create(struct be_ctrl_info *ctrl,
                          struct be_queue_info *eq, int eq_delay);
 
@@ -530,11 +636,8 @@ int beiscsi_cmd_mccq_create(struct beiscsi_hba *phba,
 int be_poll_mcc(struct be_ctrl_info *ctrl);
 int mgmt_check_supported_fw(struct be_ctrl_info *ctrl,
                                      struct beiscsi_hba *phba);
-unsigned int be_cmd_get_mac_addr(struct beiscsi_hba *phba);
-unsigned int beiscsi_get_boot_target(struct beiscsi_hba *phba);
-unsigned int beiscsi_get_session_info(struct beiscsi_hba *phba,
-                                 u32 boot_session_handle,
-                                 struct be_dma_mem *nonemb_cmd);
+unsigned int be_cmd_get_initname(struct beiscsi_hba *phba);
+unsigned int be_cmd_get_port_speed(struct beiscsi_hba *phba);
 
 void free_mcc_tag(struct be_ctrl_info *ctrl, unsigned int tag);
 /*ISCSI Functuions */
@@ -715,7 +818,7 @@ struct be_eq_delay_params_in {
 
 struct tcp_connect_and_offload_in {
        struct be_cmd_req_hdr hdr;
-       struct ip_address_format ip_address;
+       struct ip_addr_format ip_address;
        u16 tcp_port;
        u16 cid;
        u16 cq_id;
@@ -792,13 +895,14 @@ struct be_fw_cfg {
        u32 function_caps;
 } __packed;
 
-struct be_all_if_id {
+struct be_cmd_get_all_if_id_req {
        struct be_cmd_req_hdr hdr;
        u32 if_count;
        u32 if_hndl_list[1];
 } __packed;
 
 #define ISCSI_OPCODE_SCSI_DATA_OUT             5
+#define OPCODE_COMMON_NTWK_LINK_STATUS_QUERY 5
 #define OPCODE_COMMON_MODIFY_EQ_DELAY          41
 #define OPCODE_COMMON_ISCSI_CLEANUP            59
 #define        OPCODE_COMMON_TCP_UPLOAD                56
@@ -810,6 +914,8 @@ struct be_all_if_id {
 #define OPCODE_ISCSI_INI_DRIVER_OFFLOAD_SESSION 41
 #define OPCODE_ISCSI_INI_DRIVER_INVALIDATE_CONNECTION 42
 #define OPCODE_ISCSI_INI_BOOT_GET_BOOT_TARGET  52
+#define OPCODE_COMMON_WRITE_FLASH              96
+#define OPCODE_COMMON_READ_FLASH               97
 
 /* --- CMD_ISCSI_INVALIDATE_CONNECTION_TYPE --- */
 #define CMD_ISCSI_COMMAND_INVALIDATE           1
index 33c8f09..43f3503 100644 (file)
@@ -23,6 +23,8 @@
 #include <scsi/scsi_cmnd.h>
 #include <scsi/scsi_device.h>
 #include <scsi/scsi_host.h>
+#include <scsi/scsi_netlink.h>
+#include <net/netlink.h>
 #include <scsi/scsi.h>
 
 #include "be_iscsi.h"
@@ -207,6 +209,301 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
        return beiscsi_bindconn_cid(phba, beiscsi_conn, beiscsi_ep->ep_cid);
 }
 
+static int beiscsi_create_ipv4_iface(struct beiscsi_hba *phba)
+{
+       if (phba->ipv4_iface)
+               return 0;
+
+       phba->ipv4_iface = iscsi_create_iface(phba->shost,
+                                             &beiscsi_iscsi_transport,
+                                             ISCSI_IFACE_TYPE_IPV4,
+                                             0, 0);
+       if (!phba->ipv4_iface) {
+               shost_printk(KERN_ERR, phba->shost, "Could not "
+                            "create default IPv4 address.\n");
+               return -ENODEV;
+       }
+
+       return 0;
+}
+
+static int beiscsi_create_ipv6_iface(struct beiscsi_hba *phba)
+{
+       if (phba->ipv6_iface)
+               return 0;
+
+       phba->ipv6_iface = iscsi_create_iface(phba->shost,
+                                             &beiscsi_iscsi_transport,
+                                             ISCSI_IFACE_TYPE_IPV6,
+                                             0, 0);
+       if (!phba->ipv6_iface) {
+               shost_printk(KERN_ERR, phba->shost, "Could not "
+                            "create default IPv6 address.\n");
+               return -ENODEV;
+       }
+
+       return 0;
+}
+
+void beiscsi_create_def_ifaces(struct beiscsi_hba *phba)
+{
+       struct be_cmd_get_if_info_resp if_info;
+
+       if (!mgmt_get_if_info(phba, BE2_IPV4, &if_info))
+               beiscsi_create_ipv4_iface(phba);
+
+       if (!mgmt_get_if_info(phba, BE2_IPV6, &if_info))
+               beiscsi_create_ipv6_iface(phba);
+}
+
+void beiscsi_destroy_def_ifaces(struct beiscsi_hba *phba)
+{
+       if (phba->ipv6_iface)
+               iscsi_destroy_iface(phba->ipv6_iface);
+       if (phba->ipv4_iface)
+               iscsi_destroy_iface(phba->ipv4_iface);
+}
+
+static int
+beiscsi_set_static_ip(struct Scsi_Host *shost,
+               struct iscsi_iface_param_info *iface_param,
+               void *data, uint32_t dt_len)
+{
+       struct beiscsi_hba *phba = iscsi_host_priv(shost);
+       struct iscsi_iface_param_info *iface_ip = NULL;
+       struct iscsi_iface_param_info *iface_subnet = NULL;
+       struct nlattr *nla;
+       int ret;
+
+
+       switch (iface_param->param) {
+       case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
+               nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_ADDR);
+               if (nla)
+                       iface_ip = nla_data(nla);
+
+               nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_SUBNET);
+               if (nla)
+                       iface_subnet = nla_data(nla);
+               break;
+       case ISCSI_NET_PARAM_IPV4_ADDR:
+               iface_ip = iface_param;
+               nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_SUBNET);
+               if (nla)
+                       iface_subnet = nla_data(nla);
+               break;
+       case ISCSI_NET_PARAM_IPV4_SUBNET:
+               iface_subnet = iface_param;
+               nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_ADDR);
+               if (nla)
+                       iface_ip = nla_data(nla);
+               break;
+       default:
+               shost_printk(KERN_ERR, shost, "Unsupported param %d\n",
+                            iface_param->param);
+       }
+
+       if (!iface_ip || !iface_subnet) {
+               shost_printk(KERN_ERR, shost, "IP and Subnet Mask required\n");
+               return -EINVAL;
+       }
+
+       ret = mgmt_set_ip(phba, iface_ip, iface_subnet,
+                       ISCSI_BOOTPROTO_STATIC);
+
+       return ret;
+}
+
+static int
+beiscsi_set_ipv4(struct Scsi_Host *shost,
+               struct iscsi_iface_param_info *iface_param,
+               void *data, uint32_t dt_len)
+{
+       struct beiscsi_hba *phba = iscsi_host_priv(shost);
+       int ret = 0;
+
+       /* Check the param */
+       switch (iface_param->param) {
+       case ISCSI_NET_PARAM_IPV4_GW:
+               ret = mgmt_set_gateway(phba, iface_param);
+               break;
+       case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
+               if (iface_param->value[0] == ISCSI_BOOTPROTO_DHCP)
+                       ret = mgmt_set_ip(phba, iface_param,
+                                       NULL, ISCSI_BOOTPROTO_DHCP);
+               else if (iface_param->value[0] == ISCSI_BOOTPROTO_STATIC)
+                       ret = beiscsi_set_static_ip(shost, iface_param,
+                                                   data, dt_len);
+               else
+                       shost_printk(KERN_ERR, shost, "Invalid BOOTPROTO: %d\n",
+                                       iface_param->value[0]);
+               break;
+       case ISCSI_NET_PARAM_IFACE_ENABLE:
+               if (iface_param->value[0] == ISCSI_IFACE_ENABLE)
+                       ret = beiscsi_create_ipv4_iface(phba);
+               else
+                       iscsi_destroy_iface(phba->ipv4_iface);
+               break;
+       case ISCSI_NET_PARAM_IPV4_SUBNET:
+       case ISCSI_NET_PARAM_IPV4_ADDR:
+               ret = beiscsi_set_static_ip(shost, iface_param,
+                                           data, dt_len);
+               break;
+       default:
+               shost_printk(KERN_ERR, shost, "Param %d not supported\n",
+                            iface_param->param);
+       }
+
+       return ret;
+}
+
+static int
+beiscsi_set_ipv6(struct Scsi_Host *shost,
+               struct iscsi_iface_param_info *iface_param,
+               void *data, uint32_t dt_len)
+{
+       struct beiscsi_hba *phba = iscsi_host_priv(shost);
+       int ret = 0;
+
+       switch (iface_param->param) {
+       case ISCSI_NET_PARAM_IFACE_ENABLE:
+               if (iface_param->value[0] == ISCSI_IFACE_ENABLE)
+                       ret = beiscsi_create_ipv6_iface(phba);
+               else {
+                       iscsi_destroy_iface(phba->ipv6_iface);
+                       ret = 0;
+               }
+               break;
+       case ISCSI_NET_PARAM_IPV6_ADDR:
+               ret = mgmt_set_ip(phba, iface_param, NULL,
+                                 ISCSI_BOOTPROTO_STATIC);
+               break;
+       default:
+               shost_printk(KERN_ERR, shost, "Param %d not supported\n",
+                            iface_param->param);
+       }
+
+       return ret;
+}
+
+int be2iscsi_iface_set_param(struct Scsi_Host *shost,
+               void *data, uint32_t dt_len)
+{
+       struct iscsi_iface_param_info *iface_param = NULL;
+       struct nlattr *attrib;
+       uint32_t rm_len = dt_len;
+       int ret = 0 ;
+
+       nla_for_each_attr(attrib, data, dt_len, rm_len) {
+               iface_param = nla_data(attrib);
+
+               if (iface_param->param_type != ISCSI_NET_PARAM)
+                       continue;
+
+               /*
+                * BE2ISCSI only supports 1 interface
+                */
+               if (iface_param->iface_num) {
+                       shost_printk(KERN_ERR, shost, "Invalid iface_num %d."
+                                    "Only iface_num 0 is supported.\n",
+                                    iface_param->iface_num);
+                       return -EINVAL;
+               }
+
+               switch (iface_param->iface_type) {
+               case ISCSI_IFACE_TYPE_IPV4:
+                       ret = beiscsi_set_ipv4(shost, iface_param,
+                                              data, dt_len);
+                       break;
+               case ISCSI_IFACE_TYPE_IPV6:
+                       ret = beiscsi_set_ipv6(shost, iface_param,
+                                              data, dt_len);
+                       break;
+               default:
+                       shost_printk(KERN_ERR, shost,
+                                    "Invalid iface type :%d passed\n",
+                                    iface_param->iface_type);
+                       break;
+               }
+
+               if (ret)
+                       return ret;
+       }
+
+       return ret;
+}
+
+static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
+               struct iscsi_iface *iface, int param,
+               char *buf)
+{
+       struct be_cmd_get_if_info_resp if_info;
+       int len, ip_type = BE2_IPV4;
+
+       memset(&if_info, 0, sizeof(if_info));
+
+       if (iface->iface_type == ISCSI_IFACE_TYPE_IPV6)
+               ip_type = BE2_IPV6;
+
+       len = mgmt_get_if_info(phba, ip_type, &if_info);
+       if (len)
+               return len;
+
+       switch (param) {
+       case ISCSI_NET_PARAM_IPV4_ADDR:
+               len = sprintf(buf, "%pI4\n", &if_info.ip_addr.addr);
+               break;
+       case ISCSI_NET_PARAM_IPV6_ADDR:
+               len = sprintf(buf, "%pI6\n", &if_info.ip_addr.addr);
+               break;
+       case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
+               if (!if_info.dhcp_state)
+                       len = sprintf(buf, "static");
+               else
+                       len = sprintf(buf, "dhcp");
+               break;
+       case ISCSI_NET_PARAM_IPV4_SUBNET:
+               len = sprintf(buf, "%pI4\n", &if_info.ip_addr.subnet_mask);
+               break;
+       default:
+               WARN_ON(1);
+       }
+
+       return len;
+}
+
+int be2iscsi_iface_get_param(struct iscsi_iface *iface,
+               enum iscsi_param_type param_type,
+               int param, char *buf)
+{
+       struct Scsi_Host *shost = iscsi_iface_to_shost(iface);
+       struct beiscsi_hba *phba = iscsi_host_priv(shost);
+       struct be_cmd_get_def_gateway_resp gateway;
+       int len = -ENOSYS;
+
+       switch (param) {
+       case ISCSI_NET_PARAM_IPV4_ADDR:
+       case ISCSI_NET_PARAM_IPV4_SUBNET:
+       case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
+       case ISCSI_NET_PARAM_IPV6_ADDR:
+               len = be2iscsi_get_if_param(phba, iface, param, buf);
+               break;
+       case ISCSI_NET_PARAM_IFACE_ENABLE:
+               len = sprintf(buf, "enabled");
+               break;
+       case ISCSI_NET_PARAM_IPV4_GW:
+               memset(&gateway, 0, sizeof(gateway));
+               len = mgmt_get_gateway(phba, BE2_IPV4, &gateway);
+               if (!len)
+                       len = sprintf(buf, "%pI4\n", &gateway.ip_addr.addr);
+               break;
+       default:
+               len = -ENOSYS;
+       }
+
+       return len;
+}
+
 /**
  * beiscsi_ep_get_param - get the iscsi parameter
  * @ep: pointer to iscsi ep
@@ -221,7 +518,7 @@ int beiscsi_ep_get_param(struct iscsi_endpoint *ep,
        struct beiscsi_endpoint *beiscsi_ep = ep->dd_data;
        int len = 0;
 
-       SE_DEBUG(DBG_LVL_8, "In beiscsi_conn_get_param, param= %d\n", param);
+       SE_DEBUG(DBG_LVL_8, "In beiscsi_ep_get_param, param= %d\n", param);
 
        switch (param) {
        case ISCSI_PARAM_CONN_PORT:
@@ -279,6 +576,121 @@ int beiscsi_set_param(struct iscsi_cls_conn *cls_conn,
 }
 
 /**
+ * beiscsi_get_initname - Read Initiator Name from flash
+ * @buf: buffer bointer
+ * @phba: The device priv structure instance
+ *
+ * returns number of bytes
+ */
+static int beiscsi_get_initname(char *buf, struct beiscsi_hba *phba)
+{
+       int rc;
+       unsigned int tag, wrb_num;
+       unsigned short status, extd_status;
+       struct be_mcc_wrb *wrb;
+       struct be_cmd_hba_name *resp;
+       struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
+
+       tag = be_cmd_get_initname(phba);
+       if (!tag) {
+               SE_DEBUG(DBG_LVL_1, "Getting Initiator Name Failed\n");
+               return -EBUSY;
+       } else
+               wait_event_interruptible(phba->ctrl.mcc_wait[tag],
+                               phba->ctrl.mcc_numtag[tag]);
+
+       wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
+       extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
+       status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
+
+       if (status || extd_status) {
+               SE_DEBUG(DBG_LVL_1, "MailBox Command Failed with "
+                               "status = %d extd_status = %d\n",
+                               status, extd_status);
+               free_mcc_tag(&phba->ctrl, tag);
+               return -EAGAIN;
+       }
+       wrb = queue_get_wrb(mccq, wrb_num);
+       free_mcc_tag(&phba->ctrl, tag);
+       resp = embedded_payload(wrb);
+       rc = sprintf(buf, "%s\n", resp->initiator_name);
+       return rc;
+}
+
+/**
+ * beiscsi_get_port_state - Get the Port State
+ * @shost : pointer to scsi_host structure
+ *
+ * returns number of bytes
+ */
+static void beiscsi_get_port_state(struct Scsi_Host *shost)
+{
+       struct beiscsi_hba *phba = iscsi_host_priv(shost);
+       struct iscsi_cls_host *ihost = shost->shost_data;
+
+       ihost->port_state = (phba->state == BE_ADAPTER_UP) ?
+               ISCSI_PORT_STATE_UP : ISCSI_PORT_STATE_DOWN;
+}
+
+/**
+ * beiscsi_get_port_speed  - Get the Port Speed from Adapter
+ * @shost : pointer to scsi_host structure
+ *
+ * returns Success/Failure
+ */
+static int beiscsi_get_port_speed(struct Scsi_Host *shost)
+{
+       unsigned int tag, wrb_num;
+       unsigned short status, extd_status;
+       struct be_mcc_wrb *wrb;
+       struct be_cmd_ntwk_link_status_resp *resp;
+       struct beiscsi_hba *phba = iscsi_host_priv(shost);
+       struct iscsi_cls_host *ihost = shost->shost_data;
+       struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
+
+       tag = be_cmd_get_port_speed(phba);
+       if (!tag) {
+               SE_DEBUG(DBG_LVL_1, "Getting Port Speed Failed\n");
+                return -EBUSY;
+        } else
+               wait_event_interruptible(phba->ctrl.mcc_wait[tag],
+                               phba->ctrl.mcc_numtag[tag]);
+
+       wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
+       extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
+       status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
+
+       if (status || extd_status) {
+               SE_DEBUG(DBG_LVL_1, "MailBox Command Failed with "
+                               "status = %d extd_status = %d\n",
+                               status, extd_status);
+               free_mcc_tag(&phba->ctrl, tag);
+               return -EAGAIN;
+       }
+       wrb = queue_get_wrb(mccq, wrb_num);
+       free_mcc_tag(&phba->ctrl, tag);
+       resp = embedded_payload(wrb);
+
+       switch (resp->mac_speed) {
+       case BE2ISCSI_LINK_SPEED_10MBPS:
+               ihost->port_speed = ISCSI_PORT_SPEED_10MBPS;
+               break;
+       case BE2ISCSI_LINK_SPEED_100MBPS:
+               ihost->port_speed = BE2ISCSI_LINK_SPEED_100MBPS;
+               break;
+       case BE2ISCSI_LINK_SPEED_1GBPS:
+               ihost->port_speed = ISCSI_PORT_SPEED_1GBPS;
+               break;
+       case BE2ISCSI_LINK_SPEED_10GBPS:
+               ihost->port_speed = ISCSI_PORT_SPEED_10GBPS;
+               break;
+       default:
+               ihost->port_speed = ISCSI_PORT_SPEED_UNKNOWN;
+       }
+       return 0;
+}
+
+/**
  * beiscsi_get_host_param - get the iscsi parameter
  * @shost: pointer to scsi_host structure
  * @param: parameter type identifier
@@ -301,6 +713,27 @@ int beiscsi_get_host_param(struct Scsi_Host *shost,
                        return status;
                }
                break;
+       case ISCSI_HOST_PARAM_INITIATOR_NAME:
+               status = beiscsi_get_initname(buf, phba);
+               if (status < 0) {
+                       SE_DEBUG(DBG_LVL_1,
+                                       "Retreiving Initiator Name Failed\n");
+                       return status;
+               }
+               break;
+       case ISCSI_HOST_PARAM_PORT_STATE:
+               beiscsi_get_port_state(shost);
+               status = sprintf(buf, "%s\n", iscsi_get_port_state_name(shost));
+               break;
+       case ISCSI_HOST_PARAM_PORT_SPEED:
+               status = beiscsi_get_port_speed(shost);
+               if (status) {
+                       SE_DEBUG(DBG_LVL_1,
+                                       "Retreiving Port Speed Failed\n");
+                       return status;
+               }
+               status = sprintf(buf, "%s\n", iscsi_get_port_speed_name(shost));
+               break;
        default:
                return iscsi_host_get_param(shost, param, buf);
        }
@@ -309,46 +742,21 @@ int beiscsi_get_host_param(struct Scsi_Host *shost,
 
 int beiscsi_get_macaddr(char *buf, struct beiscsi_hba *phba)
 {
-       struct be_cmd_resp_get_mac_addr *resp;
-       struct be_mcc_wrb *wrb;
-       unsigned int tag, wrb_num;
-       unsigned short status, extd_status;
-       struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
+       struct be_cmd_get_nic_conf_resp resp;
        int rc;
 
-       if (phba->read_mac_address)
-               return sysfs_format_mac(buf, phba->mac_address,
-                                       ETH_ALEN);
+       if (strlen(phba->mac_address))
+               return strlcpy(buf, phba->mac_address, PAGE_SIZE);
 
-       tag = be_cmd_get_mac_addr(phba);
-       if (!tag) {
-               SE_DEBUG(DBG_LVL_1, "be_cmd_get_mac_addr Failed\n");
-               return -EBUSY;
-       } else
-               wait_event_interruptible(phba->ctrl.mcc_wait[tag],
-                                        phba->ctrl.mcc_numtag[tag]);
+       memset(&resp, 0, sizeof(resp));
+       rc = mgmt_get_nic_conf(phba, &resp);
+       if (rc)
+               return rc;
 
-       wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
-       extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
-       status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
-       if (status || extd_status) {
-               SE_DEBUG(DBG_LVL_1, "Failed to get be_cmd_get_mac_addr"
-                                   " status = %d extd_status = %d\n",
-                                   status, extd_status);
-               free_mcc_tag(&phba->ctrl, tag);
-               return -EAGAIN;
-       }
-       wrb = queue_get_wrb(mccq, wrb_num);
-       free_mcc_tag(&phba->ctrl, tag);
-       resp = embedded_payload(wrb);
-       memcpy(phba->mac_address, resp->mac_address, ETH_ALEN);
-       rc = sysfs_format_mac(buf, phba->mac_address,
-                              ETH_ALEN);
-       phba->read_mac_address = 1;
-       return rc;
+       memcpy(phba->mac_address, resp.mac_address, ETH_ALEN);
+       return sysfs_format_mac(buf, phba->mac_address, ETH_ALEN);
 }
 
-
 /**
  * beiscsi_conn_get_stats - get the iscsi stats
  * @cls_conn: pointer to iscsi cls conn
@@ -736,11 +1144,24 @@ void beiscsi_ep_disconnect(struct iscsi_endpoint *ep)
 umode_t be2iscsi_attr_is_visible(int param_type, int param)
 {
        switch (param_type) {
+       case ISCSI_NET_PARAM:
+               switch (param) {
+               case ISCSI_NET_PARAM_IFACE_ENABLE:
+               case ISCSI_NET_PARAM_IPV4_ADDR:
+               case ISCSI_NET_PARAM_IPV4_SUBNET:
+               case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
+               case ISCSI_NET_PARAM_IPV4_GW:
+               case ISCSI_NET_PARAM_IPV6_ADDR:
+                       return S_IRUGO;
+               default:
+                       return 0;
+               }
        case ISCSI_HOST_PARAM:
                switch (param) {
                case ISCSI_HOST_PARAM_HWADDRESS:
-               case ISCSI_HOST_PARAM_IPADDRESS:
                case ISCSI_HOST_PARAM_INITIATOR_NAME:
+               case ISCSI_HOST_PARAM_PORT_STATE:
+               case ISCSI_HOST_PARAM_PORT_SPEED:
                        return S_IRUGO;
                default:
                        return 0;
index 5c45be1..8b826fc 100644 (file)
 
 #define BE2_IPV4  0x1
 #define BE2_IPV6  0x10
+#define BE2_DHCP_V4 0x05
+
+#define NON_BLOCKING 0x0
+#define BLOCKING 0x1
+
+void beiscsi_create_def_ifaces(struct beiscsi_hba *phba);
+
+void beiscsi_destroy_def_ifaces(struct beiscsi_hba *phba);
+
+int be2iscsi_iface_get_param(struct iscsi_iface *iface,
+                            enum iscsi_param_type param_type,
+                            int param, char *buf);
+
+int be2iscsi_iface_set_param(struct Scsi_Host *shost,
+                            void *data, uint32_t count);
 
 umode_t be2iscsi_attr_is_visible(int param_type, int param);
 
index 375756f..0b1d99c 100644 (file)
 #include <linux/semaphore.h>
 #include <linux/iscsi_boot_sysfs.h>
 #include <linux/module.h>
+#include <linux/bsg-lib.h>
 
 #include <scsi/libiscsi.h>
+#include <scsi/scsi_bsg_iscsi.h>
+#include <scsi/scsi_netlink.h>
 #include <scsi/scsi_transport_iscsi.h>
 #include <scsi/scsi_transport.h>
 #include <scsi/scsi_cmnd.h>
@@ -48,7 +51,8 @@ static unsigned int num_hba = 0;
 
 MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table);
 MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR);
-MODULE_AUTHOR("ServerEngines Corporation");
+MODULE_VERSION(BUILD_STR);
+MODULE_AUTHOR("Emulex Corporation");
 MODULE_LICENSE("GPL");
 module_param(be_iopoll_budget, int, 0);
 module_param(enable_msix, int, 0);
@@ -147,15 +151,15 @@ static int beiscsi_eh_device_reset(struct scsi_cmnd *sc)
        struct invalidate_command_table *inv_tbl;
        struct be_dma_mem nonemb_cmd;
        unsigned int cid, tag, i, num_invalidate;
-       int rc = FAILED;
 
        /* invalidate iocbs */
        cls_session = starget_to_session(scsi_target(sc->device));
        session = cls_session->dd_data;
        spin_lock_bh(&session->lock);
-       if (!session->leadconn || session->state != ISCSI_STATE_LOGGED_IN)
-               goto unlock;
-
+       if (!session->leadconn || session->state != ISCSI_STATE_LOGGED_IN) {
+               spin_unlock_bh(&session->lock);
+               return FAILED;
+       }
        conn = session->leadconn;
        beiscsi_conn = conn->dd_data;
        phba = beiscsi_conn->phba;
@@ -208,9 +212,6 @@ static int beiscsi_eh_device_reset(struct scsi_cmnd *sc)
        pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
                            nonemb_cmd.va, nonemb_cmd.dma);
        return iscsi_eh_device_reset(sc);
-unlock:
-       spin_unlock_bh(&session->lock);
-       return rc;
 }
 
 static ssize_t beiscsi_show_boot_tgt_info(void *data, int type, char *buf)
@@ -230,10 +231,10 @@ static ssize_t beiscsi_show_boot_tgt_info(void *data, int type, char *buf)
        case ISCSI_BOOT_TGT_IP_ADDR:
                if (boot_conn->dest_ipaddr.ip_type == 0x1)
                        rc = sprintf(buf, "%pI4\n",
-                               (char *)&boot_conn->dest_ipaddr.ip_address);
+                               (char *)&boot_conn->dest_ipaddr.addr);
                else
                        rc = sprintf(str, "%pI6\n",
-                               (char *)&boot_conn->dest_ipaddr.ip_address);
+                               (char *)&boot_conn->dest_ipaddr.addr);
                break;
        case ISCSI_BOOT_TGT_PORT:
                rc = sprintf(str, "%d\n", boot_conn->dest_port);
@@ -311,12 +312,8 @@ static ssize_t beiscsi_show_boot_eth_info(void *data, int type, char *buf)
                rc = sprintf(str, "0\n");
                break;
        case ISCSI_BOOT_ETH_MAC:
-               rc  = beiscsi_get_macaddr(buf, phba);
-               if (rc < 0) {
-                       SE_DEBUG(DBG_LVL_1, "beiscsi_get_macaddr Failed\n");
-                       return rc;
-               }
-       break;
+               rc  = beiscsi_get_macaddr(str, phba);
+               break;
        default:
                rc = -ENOSYS;
                break;
@@ -394,7 +391,7 @@ MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table);
 
 static struct scsi_host_template beiscsi_sht = {
        .module = THIS_MODULE,
-       .name = "ServerEngines 10Gbe open-iscsi Initiator Driver",
+       .name = "Emulex 10Gbe open-iscsi Initiator Driver",
        .proc_name = DRV_NAME,
        .queuecommand = iscsi_queuecommand,
        .change_queue_depth = iscsi_change_queue_depth,
@@ -409,6 +406,8 @@ static struct scsi_host_template beiscsi_sht = {
        .max_sectors = BEISCSI_MAX_SECTORS,
        .cmd_per_lun = BEISCSI_CMD_PER_LUN,
        .use_clustering = ENABLE_CLUSTERING,
+       .vendor_id = SCSI_NL_VID_TYPE_PCI | BE_VENDOR_ID,
+
 };
 
 static struct scsi_transport_template *beiscsi_scsi_transport;
@@ -435,6 +434,7 @@ static struct beiscsi_hba *beiscsi_hba_alloc(struct pci_dev *pcidev)
        phba->shost = shost;
        phba->pcidev = pci_dev_get(pcidev);
        pci_set_drvdata(pcidev, phba);
+       phba->interface_handle = 0xFFFFFFFF;
 
        if (iscsi_host_add(shost, &phba->pcidev->dev))
                goto free_devices;
@@ -544,8 +544,7 @@ static int be_ctrl_init(struct beiscsi_hba *phba, struct pci_dev *pdev)
                                                  &mbox_mem_alloc->dma);
        if (!mbox_mem_alloc->va) {
                beiscsi_unmap_pci_function(phba);
-               status = -ENOMEM;
-               return status;
+               return -ENOMEM;
        }
 
        mbox_mem_align->size = sizeof(struct be_mcc_mailbox);
@@ -1252,9 +1251,9 @@ hwi_complete_drvr_msgs(struct beiscsi_conn *beiscsi_conn,
        task = pwrb_handle->pio_handle;
 
        io_task = task->dd_data;
-       spin_lock(&phba->mgmt_sgl_lock);
+       spin_lock_bh(&phba->mgmt_sgl_lock);
        free_mgmt_sgl_handle(phba, io_task->psgl_handle);
-       spin_unlock(&phba->mgmt_sgl_lock);
+       spin_unlock_bh(&phba->mgmt_sgl_lock);
        spin_lock_bh(&session->lock);
        free_wrb_handle(phba, pwrb_context, pwrb_handle);
        spin_unlock_bh(&session->lock);
@@ -1370,8 +1369,6 @@ hwi_get_async_handle(struct beiscsi_hba *phba,
        struct be_bus_address phys_addr;
        struct list_head *pbusy_list;
        struct async_pdu_handle *pasync_handle = NULL;
-       int buffer_len = 0;
-       unsigned char buffer_index = -1;
        unsigned char is_header = 0;
 
        phys_addr.u.a32.address_lo =
@@ -1392,22 +1389,11 @@ hwi_get_async_handle(struct beiscsi_hba *phba,
                pbusy_list = hwi_get_async_busy_list(pasync_ctx, 1,
                        (pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe,
                        index) / 32] & PDUCQE_INDEX_MASK));
-
-               buffer_len = (unsigned int)(phys_addr.u.a64.address -
-                               pasync_ctx->async_header.pa_base.u.a64.address);
-
-               buffer_index = buffer_len /
-                               pasync_ctx->async_header.buffer_size;
-
                break;
        case UNSOL_DATA_NOTIFY:
                pbusy_list = hwi_get_async_busy_list(pasync_ctx, 0, (pdpdu_cqe->
                                        dw[offsetof(struct amap_i_t_dpdu_cqe,
                                        index) / 32] & PDUCQE_INDEX_MASK));
-               buffer_len = (unsigned long)(phys_addr.u.a64.address -
-                                       pasync_ctx->async_data.pa_base.u.
-                                       a64.address);
-               buffer_index = buffer_len / pasync_ctx->async_data.buffer_size;
                break;
        default:
                pbusy_list = NULL;
@@ -1418,11 +1404,9 @@ hwi_get_async_handle(struct beiscsi_hba *phba,
                return NULL;
        }
 
-       WARN_ON(!(buffer_index <= pasync_ctx->async_data.num_entries));
        WARN_ON(list_empty(pbusy_list));
        list_for_each_entry(pasync_handle, pbusy_list, link) {
-               WARN_ON(pasync_handle->consumed);
-               if (pasync_handle->index == buffer_index)
+               if (pasync_handle->pa.u.a64.address == phys_addr.u.a64.address)
                        break;
        }
 
@@ -1449,15 +1433,13 @@ hwi_update_async_writables(struct hwi_async_pdu_context *pasync_ctx,
        unsigned int num_entries, writables = 0;
        unsigned int *pep_read_ptr, *pwritables;
 
-
+       num_entries = pasync_ctx->num_entries;
        if (is_header) {
                pep_read_ptr = &pasync_ctx->async_header.ep_read_ptr;
                pwritables = &pasync_ctx->async_header.writables;
-               num_entries = pasync_ctx->async_header.num_entries;
        } else {
                pep_read_ptr = &pasync_ctx->async_data.ep_read_ptr;
                pwritables = &pasync_ctx->async_data.writables;
-               num_entries = pasync_ctx->async_data.num_entries;
        }
 
        while ((*pep_read_ptr) != cq_index) {
@@ -1491,14 +1473,13 @@ hwi_update_async_writables(struct hwi_async_pdu_context *pasync_ctx,
        return 0;
 }
 
-static unsigned int hwi_free_async_msg(struct beiscsi_hba *phba,
+static void hwi_free_async_msg(struct beiscsi_hba *phba,
                                       unsigned int cri)
 {
        struct hwi_controller *phwi_ctrlr;
        struct hwi_async_pdu_context *pasync_ctx;
        struct async_pdu_handle *pasync_handle, *tmp_handle;
        struct list_head *plist;
-       unsigned int i = 0;
 
        phwi_ctrlr = phba->phwi_ctrlr;
        pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr);
@@ -1508,23 +1489,20 @@ static unsigned int hwi_free_async_msg(struct beiscsi_hba *phba,
        list_for_each_entry_safe(pasync_handle, tmp_handle, plist, link) {
                list_del(&pasync_handle->link);
 
-               if (i == 0) {
+               if (pasync_handle->is_header) {
                        list_add_tail(&pasync_handle->link,
                                      &pasync_ctx->async_header.free_list);
                        pasync_ctx->async_header.free_entries++;
-                       i++;
                } else {
                        list_add_tail(&pasync_handle->link,
                                      &pasync_ctx->async_data.free_list);
                        pasync_ctx->async_data.free_entries++;
-                       i++;
                }
        }
 
        INIT_LIST_HEAD(&pasync_ctx->async_entry[cri].wait_queue.list);
        pasync_ctx->async_entry[cri].wait_queue.hdr_received = 0;
        pasync_ctx->async_entry[cri].wait_queue.bytes_received = 0;
-       return 0;
 }
 
 static struct phys_addr *
@@ -1557,16 +1535,15 @@ static void hwi_post_async_buffers(struct beiscsi_hba *phba,
 
        phwi_ctrlr = phba->phwi_ctrlr;
        pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr);
+       num_entries = pasync_ctx->num_entries;
 
        if (is_header) {
-               num_entries = pasync_ctx->async_header.num_entries;
                writables = min(pasync_ctx->async_header.writables,
                                pasync_ctx->async_header.free_entries);
                pfree_link = pasync_ctx->async_header.free_list.next;
                host_write_num = pasync_ctx->async_header.host_write_ptr;
                ring_id = phwi_ctrlr->default_pdu_hdr.id;
        } else {
-               num_entries = pasync_ctx->async_data.num_entries;
                writables = min(pasync_ctx->async_data.writables,
                                pasync_ctx->async_data.free_entries);
                pfree_link = pasync_ctx->async_data.free_list.next;
@@ -1673,7 +1650,7 @@ hwi_fwd_async_msg(struct beiscsi_conn *beiscsi_conn,
                        }
                        memcpy(pfirst_buffer + offset,
                               pasync_handle->pbuffer, buf_len);
-                       offset = buf_len;
+                       offset += buf_len;
                }
                index++;
        }
@@ -1682,10 +1659,9 @@ hwi_fwd_async_msg(struct beiscsi_conn *beiscsi_conn,
                                           (beiscsi_conn->beiscsi_conn_cid -
                                            phba->fw_config.iscsi_cid_start),
                                            phdr, hdr_len, pfirst_buffer,
-                                           buf_len);
+                                           offset);
 
-       if (status == 0)
-               hwi_free_async_msg(phba, cri);
+       hwi_free_async_msg(phba, cri);
        return 0;
 }
 
@@ -2229,7 +2205,7 @@ static int beiscsi_alloc_mem(struct beiscsi_hba *phba)
        struct mem_array *mem_arr, *mem_arr_orig;
        unsigned int i, j, alloc_size, curr_alloc_size;
 
-       phba->phwi_ctrlr = kmalloc(phba->params.hwi_ws_sz, GFP_KERNEL);
+       phba->phwi_ctrlr = kzalloc(phba->params.hwi_ws_sz, GFP_KERNEL);
        if (!phba->phwi_ctrlr)
                return -ENOMEM;
 
@@ -2349,27 +2325,21 @@ static void iscsi_init_global_templates(struct beiscsi_hba *phba)
        AMAP_SET_BITS(struct amap_pdu_nop_out, i_bit, pnop_out, 0);
 }
 
-static void beiscsi_init_wrb_handle(struct beiscsi_hba *phba)
+static int beiscsi_init_wrb_handle(struct beiscsi_hba *phba)
 {
        struct be_mem_descriptor *mem_descr_wrbh, *mem_descr_wrb;
-       struct wrb_handle *pwrb_handle;
+       struct wrb_handle *pwrb_handle = NULL;
        struct hwi_controller *phwi_ctrlr;
        struct hwi_wrb_context *pwrb_context;
-       struct iscsi_wrb *pwrb;
-       unsigned int num_cxn_wrbh;
-       unsigned int num_cxn_wrb, j, idx, index;
+       struct iscsi_wrb *pwrb = NULL;
+       unsigned int num_cxn_wrbh = 0;
+       unsigned int num_cxn_wrb = 0, j, idx = 0, index;
 
        mem_descr_wrbh = phba->init_mem;
        mem_descr_wrbh += HWI_MEM_WRBH;
 
        mem_descr_wrb = phba->init_mem;
        mem_descr_wrb += HWI_MEM_WRB;
-
-       idx = 0;
-       pwrb_handle = mem_descr_wrbh->mem_array[idx].virtual_address;
-       num_cxn_wrbh = ((mem_descr_wrbh->mem_array[idx].size) /
-                       ((sizeof(struct wrb_handle)) *
-                        phba->params.wrbs_per_cxn));
        phwi_ctrlr = phba->phwi_ctrlr;
 
        for (index = 0; index < phba->params.cxns_per_ctrl * 2; index += 2) {
@@ -2377,12 +2347,32 @@ static void beiscsi_init_wrb_handle(struct beiscsi_hba *phba)
                pwrb_context->pwrb_handle_base =
                                kzalloc(sizeof(struct wrb_handle *) *
                                        phba->params.wrbs_per_cxn, GFP_KERNEL);
+               if (!pwrb_context->pwrb_handle_base) {
+                       shost_printk(KERN_ERR, phba->shost,
+                                       "Mem Alloc Failed. Failing to load\n");
+                       goto init_wrb_hndl_failed;
+               }
                pwrb_context->pwrb_handle_basestd =
                                kzalloc(sizeof(struct wrb_handle *) *
                                        phba->params.wrbs_per_cxn, GFP_KERNEL);
+               if (!pwrb_context->pwrb_handle_basestd) {
+                       shost_printk(KERN_ERR, phba->shost,
+                                       "Mem Alloc Failed. Failing to load\n");
+                       goto init_wrb_hndl_failed;
+               }
+               if (!num_cxn_wrbh) {
+                       pwrb_handle =
+                               mem_descr_wrbh->mem_array[idx].virtual_address;
+                       num_cxn_wrbh = ((mem_descr_wrbh->mem_array[idx].size) /
+                                       ((sizeof(struct wrb_handle)) *
+                                        phba->params.wrbs_per_cxn));
+                       idx++;
+               }
+               pwrb_context->alloc_index = 0;
+               pwrb_context->wrb_handles_available = 0;
+               pwrb_context->free_index = 0;
+
                if (num_cxn_wrbh) {
-                       pwrb_context->alloc_index = 0;
-                       pwrb_context->wrb_handles_available = 0;
                        for (j = 0; j < phba->params.wrbs_per_cxn; j++) {
                                pwrb_context->pwrb_handle_base[j] = pwrb_handle;
                                pwrb_context->pwrb_handle_basestd[j] =
@@ -2391,49 +2381,21 @@ static void beiscsi_init_wrb_handle(struct beiscsi_hba *phba)
                                pwrb_handle->wrb_index = j;
                                pwrb_handle++;
                        }
-                       pwrb_context->free_index = 0;
-                       num_cxn_wrbh--;
-               } else {
-                       idx++;
-                       pwrb_handle =
-                           mem_descr_wrbh->mem_array[idx].virtual_address;
-                       num_cxn_wrbh =
-                           ((mem_descr_wrbh->mem_array[idx].size) /
-                            ((sizeof(struct wrb_handle)) *
-                             phba->params.wrbs_per_cxn));
-                       pwrb_context->alloc_index = 0;
-                       for (j = 0; j < phba->params.wrbs_per_cxn; j++) {
-                               pwrb_context->pwrb_handle_base[j] = pwrb_handle;
-                               pwrb_context->pwrb_handle_basestd[j] =
-                                   pwrb_handle;
-                               pwrb_context->wrb_handles_available++;
-                               pwrb_handle->wrb_index = j;
-                               pwrb_handle++;
-                       }
-                       pwrb_context->free_index = 0;
                        num_cxn_wrbh--;
                }
        }
        idx = 0;
-       pwrb = mem_descr_wrb->mem_array[idx].virtual_address;
-       num_cxn_wrb = (mem_descr_wrb->mem_array[idx].size) /
-                     ((sizeof(struct iscsi_wrb) *
-                       phba->params.wrbs_per_cxn));
        for (index = 0; index < phba->params.cxns_per_ctrl * 2; index += 2) {
                pwrb_context = &phwi_ctrlr->wrb_context[index];
-               if (num_cxn_wrb) {
-                       for (j = 0; j < phba->params.wrbs_per_cxn; j++) {
-                               pwrb_handle = pwrb_context->pwrb_handle_base[j];
-                               pwrb_handle->pwrb = pwrb;
-                               pwrb++;
-                       }
-                       num_cxn_wrb--;
-               } else {
-                       idx++;
+               if (!num_cxn_wrb) {
                        pwrb = mem_descr_wrb->mem_array[idx].virtual_address;
                        num_cxn_wrb = (mem_descr_wrb->mem_array[idx].size) /
-                                     ((sizeof(struct iscsi_wrb) *
-                                       phba->params.wrbs_per_cxn));
+                               ((sizeof(struct iscsi_wrb) *
+                                 phba->params.wrbs_per_cxn));
+                       idx++;
+               }
+
+               if (num_cxn_wrb) {
                        for (j = 0; j < phba->params.wrbs_per_cxn; j++) {
                                pwrb_handle = pwrb_context->pwrb_handle_base[j];
                                pwrb_handle->pwrb = pwrb;
@@ -2442,6 +2404,14 @@ static void beiscsi_init_wrb_handle(struct beiscsi_hba *phba)
                        num_cxn_wrb--;
                }
        }
+       return 0;
+init_wrb_hndl_failed:
+       for (j = index; j > 0; j--) {
+               pwrb_context = &phwi_ctrlr->wrb_context[j];
+               kfree(pwrb_context->pwrb_handle_base);
+               kfree(pwrb_context->pwrb_handle_basestd);
+       }
+       return -ENOMEM;
 }
 
 static void hwi_init_async_pdu_ctx(struct beiscsi_hba *phba)
@@ -2450,7 +2420,7 @@ static void hwi_init_async_pdu_ctx(struct beiscsi_hba *phba)
        struct hba_parameters *p = &phba->params;
        struct hwi_async_pdu_context *pasync_ctx;
        struct async_pdu_handle *pasync_header_h, *pasync_data_h;
-       unsigned int index;
+       unsigned int index, idx, num_per_mem, num_async_data;
        struct be_mem_descriptor *mem_descr;
 
        mem_descr = (struct be_mem_descriptor *)phba->init_mem;
@@ -2462,10 +2432,8 @@ static void hwi_init_async_pdu_ctx(struct beiscsi_hba *phba)
        pasync_ctx = phwi_ctrlr->phwi_ctxt->pasync_ctx;
        memset(pasync_ctx, 0, sizeof(*pasync_ctx));
 
-       pasync_ctx->async_header.num_entries = p->asyncpdus_per_ctrl;
-       pasync_ctx->async_header.buffer_size = p->defpdu_hdr_sz;
-       pasync_ctx->async_data.buffer_size = p->defpdu_data_sz;
-       pasync_ctx->async_data.num_entries = p->asyncpdus_per_ctrl;
+       pasync_ctx->num_entries = p->asyncpdus_per_ctrl;
+       pasync_ctx->buffer_size = p->defpdu_hdr_sz;
 
        mem_descr = (struct be_mem_descriptor *)phba->init_mem;
        mem_descr += HWI_MEM_ASYNC_HEADER_BUF;
@@ -2510,19 +2478,6 @@ static void hwi_init_async_pdu_ctx(struct beiscsi_hba *phba)
        pasync_ctx->async_header.writables = 0;
        INIT_LIST_HEAD(&pasync_ctx->async_header.free_list);
 
-       mem_descr = (struct be_mem_descriptor *)phba->init_mem;
-       mem_descr += HWI_MEM_ASYNC_DATA_BUF;
-       if (mem_descr->mem_array[0].virtual_address) {
-               SE_DEBUG(DBG_LVL_8,
-                        "hwi_init_async_pdu_ctx HWI_MEM_ASYNC_DATA_BUF"
-                        "va=%p\n", mem_descr->mem_array[0].virtual_address);
-       } else
-               shost_printk(KERN_WARNING, phba->shost,
-                           "No Virtual address\n");
-       pasync_ctx->async_data.va_base =
-                       mem_descr->mem_array[0].virtual_address;
-       pasync_ctx->async_data.pa_base.u.a64.address =
-                       mem_descr->mem_array[0].bus_address.u.a64.address;
 
        mem_descr = (struct be_mem_descriptor *)phba->init_mem;
        mem_descr += HWI_MEM_ASYNC_DATA_RING;
@@ -2553,6 +2508,25 @@ static void hwi_init_async_pdu_ctx(struct beiscsi_hba *phba)
        pasync_data_h =
                (struct async_pdu_handle *)pasync_ctx->async_data.handle_base;
 
+       mem_descr = (struct be_mem_descriptor *)phba->init_mem;
+       mem_descr += HWI_MEM_ASYNC_DATA_BUF;
+       if (mem_descr->mem_array[0].virtual_address) {
+               SE_DEBUG(DBG_LVL_8,
+                        "hwi_init_async_pdu_ctx HWI_MEM_ASYNC_DATA_BUF"
+                        "va=%p\n", mem_descr->mem_array[0].virtual_address);
+       } else
+               shost_printk(KERN_WARNING, phba->shost,
+                           "No Virtual address\n");
+       idx = 0;
+       pasync_ctx->async_data.va_base =
+                       mem_descr->mem_array[idx].virtual_address;
+       pasync_ctx->async_data.pa_base.u.a64.address =
+                       mem_descr->mem_array[idx].bus_address.u.a64.address;
+
+       num_async_data = ((mem_descr->mem_array[idx].size) /
+                               phba->params.defpdu_data_sz);
+       num_per_mem = 0;
+
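+       /*
+        * num_per_mem indexes data buffers within the current mem_array
+        * chunk, while num_async_data counts how many default-PDU data
+        * buffers that chunk can still supply before the loop below has
+        * to move on to the next chunk.
+        */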
        for (index = 0; index < p->asyncpdus_per_ctrl; index++) {
                pasync_header_h->cri = -1;
                pasync_header_h->index = (char)index;
@@ -2578,14 +2552,29 @@ static void hwi_init_async_pdu_ctx(struct beiscsi_hba *phba)
                pasync_data_h->cri = -1;
                pasync_data_h->index = (char)index;
                INIT_LIST_HEAD(&pasync_data_h->link);
+
+               if (!num_async_data) {
+                       num_per_mem = 0;
+                       idx++;
+                       pasync_ctx->async_data.va_base =
+                               mem_descr->mem_array[idx].virtual_address;
+                       pasync_ctx->async_data.pa_base.u.a64.address =
+                               mem_descr->mem_array[idx].
+                               bus_address.u.a64.address;
+
+                       num_async_data = ((mem_descr->mem_array[idx].size) /
+                                       phba->params.defpdu_data_sz);
+               }
                pasync_data_h->pbuffer =
                        (void *)((unsigned long)
                        (pasync_ctx->async_data.va_base) +
-                       (p->defpdu_data_sz * index));
+                       (p->defpdu_data_sz * num_per_mem));
 
                pasync_data_h->pa.u.a64.address =
                    pasync_ctx->async_data.pa_base.u.a64.address +
-                   (p->defpdu_data_sz * index);
+                   (p->defpdu_data_sz * num_per_mem);
+               num_per_mem++;
+               num_async_data--;
 
                list_add_tail(&pasync_data_h->link,
                              &pasync_ctx->async_data.free_list);
@@ -2913,9 +2902,11 @@ beiscsi_post_pages(struct beiscsi_hba *phba)
 static void be_queue_free(struct beiscsi_hba *phba, struct be_queue_info *q)
 {
        struct be_dma_mem *mem = &q->dma_mem;
-       if (mem->va)
+       if (mem->va) {
                pci_free_consistent(phba->pcidev, mem->size,
                        mem->va, mem->dma);
+               mem->va = NULL;
+       }
 }
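
The extra mem->va = NULL above makes be_queue_free() safe to call more than once on the same queue. The same guard pattern in plain C, with placeholder names (example_queue, example_queue_free) rather than the driver's:

#include <stdlib.h>

struct example_queue {
	void	*va;	/* CPU address of the backing buffer, or NULL */
	size_t	 size;
};

/*
 * Free-and-clear: after the first call q->va is NULL, so a second call
 * from an overlapping error/teardown path is a harmless no-op instead of
 * a double free (pci_free_consistent() plays this role in the driver).
 */
static void example_queue_free(struct example_queue *q)
{
	if (q->va) {
		free(q->va);
		q->va = NULL;
	}
}
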
 
 static int be_queue_alloc(struct beiscsi_hba *phba, struct be_queue_info *q,
@@ -3215,7 +3206,7 @@ static int hwi_init_port(struct beiscsi_hba *phba)
 error:
        shost_printk(KERN_ERR, phba->shost, "hwi_init_port failed");
        hwi_cleanup(phba);
-       return -ENOMEM;
+       return status;
 }
 
 static int hwi_init_controller(struct beiscsi_hba *phba)
@@ -3236,7 +3227,9 @@ static int hwi_init_controller(struct beiscsi_hba *phba)
        }
 
        iscsi_init_global_templates(phba);
-       beiscsi_init_wrb_handle(phba);
+       if (beiscsi_init_wrb_handle(phba))
+               return -ENOMEM;
+
        hwi_init_async_pdu_ctx(phba);
        if (hwi_init_port(phba) != 0) {
                shost_printk(KERN_ERR, phba->shost,
@@ -3288,7 +3281,7 @@ static int beiscsi_init_controller(struct beiscsi_hba *phba)
 
 free_init:
        beiscsi_free_mem(phba);
-       return -ENOMEM;
+       return ret;
 }
 
 static int beiscsi_init_sgl_handle(struct beiscsi_hba *phba)
@@ -3475,8 +3468,8 @@ static void hwi_disable_intr(struct beiscsi_hba *phba)
 
 static int beiscsi_get_boot_info(struct beiscsi_hba *phba)
 {
-       struct be_cmd_resp_get_boot_target *boot_resp;
-       struct be_cmd_resp_get_session *session_resp;
+       struct be_cmd_get_boot_target_resp *boot_resp;
+       struct be_cmd_get_session_resp *session_resp;
        struct be_mcc_wrb *wrb;
        struct be_dma_mem nonemb_cmd;
        unsigned int tag, wrb_num;
@@ -3484,9 +3477,9 @@ static int beiscsi_get_boot_info(struct beiscsi_hba *phba)
        struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
        int ret = -ENOMEM;
 
-       tag = beiscsi_get_boot_target(phba);
+       tag = mgmt_get_boot_target(phba);
        if (!tag) {
-               SE_DEBUG(DBG_LVL_1, "be_cmd_get_mac_addr Failed\n");
+               SE_DEBUG(DBG_LVL_1, "beiscsi_get_boot_info Failed\n");
                return -EAGAIN;
        } else
                wait_event_interruptible(phba->ctrl.mcc_wait[tag],
@@ -3496,7 +3489,7 @@ static int beiscsi_get_boot_info(struct beiscsi_hba *phba)
        extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
        status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
        if (status || extd_status) {
-               SE_DEBUG(DBG_LVL_1, "be_cmd_get_mac_addr Failed"
+               SE_DEBUG(DBG_LVL_1, "beiscsi_get_boot_info Failed"
                                    " status = %d extd_status = %d\n",
                                    status, extd_status);
                free_mcc_tag(&phba->ctrl, tag);
@@ -3522,8 +3515,8 @@ static int beiscsi_get_boot_info(struct beiscsi_hba *phba)
        }
 
        memset(nonemb_cmd.va, 0, sizeof(*session_resp));
-       tag = beiscsi_get_session_info(phba,
-               boot_resp->boot_session_handle, &nonemb_cmd);
+       tag = mgmt_get_session_info(phba, boot_resp->boot_session_handle,
+                                   &nonemb_cmd);
        if (!tag) {
                SE_DEBUG(DBG_LVL_1, "beiscsi_get_session_info"
                        " Failed\n");
@@ -3696,6 +3689,57 @@ static void beiscsi_clean_port(struct beiscsi_hba *phba)
        kfree(phba->ep_array);
 }
 
+static void beiscsi_cleanup_task(struct iscsi_task *task)
+{
+       struct beiscsi_io_task *io_task = task->dd_data;
+       struct iscsi_conn *conn = task->conn;
+       struct beiscsi_conn *beiscsi_conn = conn->dd_data;
+       struct beiscsi_hba *phba = beiscsi_conn->phba;
+       struct beiscsi_session *beiscsi_sess = beiscsi_conn->beiscsi_sess;
+       struct hwi_wrb_context *pwrb_context;
+       struct hwi_controller *phwi_ctrlr;
+
+       phwi_ctrlr = phba->phwi_ctrlr;
+       pwrb_context = &phwi_ctrlr->wrb_context[beiscsi_conn->beiscsi_conn_cid
+                       - phba->fw_config.iscsi_cid_start];
+
+       if (io_task->cmd_bhs) {
+               pci_pool_free(beiscsi_sess->bhs_pool, io_task->cmd_bhs,
+                             io_task->bhs_pa.u.a64.address);
+               io_task->cmd_bhs = NULL;
+       }
+
+       if (task->sc) {
+               if (io_task->pwrb_handle) {
+                       free_wrb_handle(phba, pwrb_context,
+                                       io_task->pwrb_handle);
+                       io_task->pwrb_handle = NULL;
+               }
+
+               if (io_task->psgl_handle) {
+                       spin_lock(&phba->io_sgl_lock);
+                       free_io_sgl_handle(phba, io_task->psgl_handle);
+                       spin_unlock(&phba->io_sgl_lock);
+                       io_task->psgl_handle = NULL;
+               }
+       } else {
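+               /*
+                * Management/login task: keep the WRB and SGL handles while
+                * a login exchange is still in flight; they are released via
+                * beiscsi_offload_connection(), which clears login_in_progress
+                * and calls this function again.
+                */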
+               if (!beiscsi_conn->login_in_progress) {
+                       if (io_task->pwrb_handle) {
+                               free_wrb_handle(phba, pwrb_context,
+                                               io_task->pwrb_handle);
+                               io_task->pwrb_handle = NULL;
+                       }
+                       if (io_task->psgl_handle) {
+                               spin_lock(&phba->mgmt_sgl_lock);
+                               free_mgmt_sgl_handle(phba,
+                                                    io_task->psgl_handle);
+                               spin_unlock(&phba->mgmt_sgl_lock);
+                               io_task->psgl_handle = NULL;
+                       }
+               }
+       }
+}
+
 void
 beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn,
                           struct beiscsi_offload_params *params)
@@ -3704,12 +3748,19 @@ beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn,
        struct iscsi_target_context_update_wrb *pwrb = NULL;
        struct be_mem_descriptor *mem_descr;
        struct beiscsi_hba *phba = beiscsi_conn->phba;
+       struct iscsi_task *task = beiscsi_conn->task;
+       struct iscsi_session *session = task->conn->session;
        u32 doorbell = 0;
 
        /*
         * We can always use 0 here because it is reserved by libiscsi for
         * login/startup related tasks.
         */
+       beiscsi_conn->login_in_progress = 0;
+       spin_lock_bh(&session->lock);
+       beiscsi_cleanup_task(task);
+       spin_unlock_bh(&session->lock);
+
        pwrb_handle = alloc_wrb_handle(phba, (beiscsi_conn->beiscsi_conn_cid -
                                       phba->fw_config.iscsi_cid_start));
        pwrb = (struct iscsi_target_context_update_wrb *)pwrb_handle->pwrb;
@@ -3823,7 +3874,7 @@ static int beiscsi_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
        task->hdr = (struct iscsi_hdr *)&io_task->cmd_bhs->iscsi_hdr;
        task->hdr_max = sizeof(struct be_cmd_bhs);
        io_task->psgl_handle = NULL;
-       io_task->psgl_handle = NULL;
+       io_task->pwrb_handle = NULL;
 
        if (task->sc) {
                spin_lock(&phba->io_sgl_lock);
@@ -3865,6 +3916,7 @@ static int beiscsi_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
                                io_task->pwrb_handle =
                                                beiscsi_conn->plogin_wrb_handle;
                        }
+                       beiscsi_conn->task = task;
                } else {
                        spin_lock(&phba->mgmt_sgl_lock);
                        io_task->psgl_handle = alloc_mgmt_sgl_handle(phba);
@@ -3907,53 +3959,11 @@ free_hndls:
        io_task->pwrb_handle = NULL;
        pci_pool_free(beiscsi_sess->bhs_pool, io_task->cmd_bhs,
                      io_task->bhs_pa.u.a64.address);
+       io_task->cmd_bhs = NULL;
        SE_DEBUG(DBG_LVL_1, "Alloc of SGL_ICD Failed\n");
        return -ENOMEM;
 }
 
-static void beiscsi_cleanup_task(struct iscsi_task *task)
-{
-       struct beiscsi_io_task *io_task = task->dd_data;
-       struct iscsi_conn *conn = task->conn;
-       struct beiscsi_conn *beiscsi_conn = conn->dd_data;
-       struct beiscsi_hba *phba = beiscsi_conn->phba;
-       struct beiscsi_session *beiscsi_sess = beiscsi_conn->beiscsi_sess;
-       struct hwi_wrb_context *pwrb_context;
-       struct hwi_controller *phwi_ctrlr;
-
-       phwi_ctrlr = phba->phwi_ctrlr;
-       pwrb_context = &phwi_ctrlr->wrb_context[beiscsi_conn->beiscsi_conn_cid
-                       - phba->fw_config.iscsi_cid_start];
-       if (io_task->pwrb_handle) {
-               free_wrb_handle(phba, pwrb_context, io_task->pwrb_handle);
-               io_task->pwrb_handle = NULL;
-       }
-
-       if (io_task->cmd_bhs) {
-               pci_pool_free(beiscsi_sess->bhs_pool, io_task->cmd_bhs,
-                             io_task->bhs_pa.u.a64.address);
-       }
-
-       if (task->sc) {
-               if (io_task->psgl_handle) {
-                       spin_lock(&phba->io_sgl_lock);
-                       free_io_sgl_handle(phba, io_task->psgl_handle);
-                       spin_unlock(&phba->io_sgl_lock);
-                       io_task->psgl_handle = NULL;
-               }
-       } else {
-               if (task->hdr &&
-                  ((task->hdr->opcode & ISCSI_OPCODE_MASK) == ISCSI_OP_LOGIN))
-                       return;
-               if (io_task->psgl_handle) {
-                       spin_lock(&phba->mgmt_sgl_lock);
-                       free_mgmt_sgl_handle(phba, io_task->psgl_handle);
-                       spin_unlock(&phba->mgmt_sgl_lock);
-                       io_task->psgl_handle = NULL;
-               }
-       }
-}
-
 static int beiscsi_iotask(struct iscsi_task *task, struct scatterlist *sg,
                          unsigned int num_sg, unsigned int xferlen,
                          unsigned int writedir)
@@ -3993,7 +4003,8 @@ static int beiscsi_iotask(struct iscsi_task *task, struct scatterlist *sg,
               &io_task->cmd_bhs->iscsi_hdr.lun, sizeof(struct scsi_lun));
 
        AMAP_SET_BITS(struct amap_iscsi_wrb, lun, pwrb,
-                     cpu_to_be16(*(unsigned short *)&io_task->cmd_bhs->iscsi_hdr.lun));
+                     cpu_to_be16(*(unsigned short *)
+                                 &io_task->cmd_bhs->iscsi_hdr.lun));
        AMAP_SET_BITS(struct amap_iscsi_wrb, r2t_exp_dtl, pwrb, xferlen);
        AMAP_SET_BITS(struct amap_iscsi_wrb, wrb_idx, pwrb,
                      io_task->pwrb_handle->wrb_index);
@@ -4126,6 +4137,76 @@ static int beiscsi_task_xmit(struct iscsi_task *task)
        return beiscsi_iotask(task, sg, num_sg, xferlen, writedir);
 }
 
+/**
+ * beiscsi_bsg_request - handle bsg request from ISCSI transport
+ * @job: job to handle
+ */
+static int beiscsi_bsg_request(struct bsg_job *job)
+{
+       struct Scsi_Host *shost;
+       struct beiscsi_hba *phba;
+       struct iscsi_bsg_request *bsg_req = job->request;
+       int rc = -EINVAL;
+       unsigned int tag;
+       struct be_dma_mem nonemb_cmd;
+       struct be_cmd_resp_hdr *resp;
+       struct iscsi_bsg_reply *bsg_reply = job->reply;
+       unsigned short status, extd_status;
+
+       shost = iscsi_job_to_shost(job);
+       phba = iscsi_host_priv(shost);
+
+       switch (bsg_req->msgcode) {
+       case ISCSI_BSG_HST_VENDOR:
+               nonemb_cmd.va = pci_alloc_consistent(phba->ctrl.pdev,
+                                       job->request_payload.payload_len,
+                                       &nonemb_cmd.dma);
+               if (nonemb_cmd.va == NULL) {
+                       SE_DEBUG(DBG_LVL_1, "Failed to allocate memory for "
+                                "beiscsi_bsg_request\n");
+                       return -EIO;
+               }
+               tag = mgmt_vendor_specific_fw_cmd(&phba->ctrl, phba, job,
+                                                 &nonemb_cmd);
+               if (!tag) {
+                       SE_DEBUG(DBG_LVL_1, "mgmt_vendor_specific_fw_cmd Failed\n");
+                       pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
+                                           nonemb_cmd.va, nonemb_cmd.dma);
+                       return -EAGAIN;
+               } else
+                       wait_event_interruptible(phba->ctrl.mcc_wait[tag],
+                                                phba->ctrl.mcc_numtag[tag]);
+               extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
+               status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
+               free_mcc_tag(&phba->ctrl, tag);
+               resp = (struct be_cmd_resp_hdr *)nonemb_cmd.va;
+               sg_copy_from_buffer(job->reply_payload.sg_list,
+                                   job->reply_payload.sg_cnt,
+                                   nonemb_cmd.va, (resp->response_length
+                                   + sizeof(*resp)));
+               bsg_reply->reply_payload_rcv_len = resp->response_length;
+               bsg_reply->result = status;
+               bsg_job_done(job, bsg_reply->result,
+                            bsg_reply->reply_payload_rcv_len);
+               pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
+                                   nonemb_cmd.va, nonemb_cmd.dma);
+               if (status || extd_status) {
+                       SE_DEBUG(DBG_LVL_1, "beiscsi_bsg_request Failed"
+                                " status = %d extd_status = %d\n",
+                                status, extd_status);
+                       return -EIO;
+               }
+               break;
+
+       default:
+               SE_DEBUG(DBG_LVL_1, "Unsupported bsg command: 0x%x\n",
+                        bsg_req->msgcode);
+               break;
+       }
+
+       return rc;
+}
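
beiscsi_bsg_request() wires the driver into the iSCSI transport's bsg channel and currently services only ISCSI_BSG_HST_VENDOR messages, which it forwards to mgmt_vendor_specific_fw_cmd() and completes with the decoded MCC status. As a rough orientation only, a user-space caller could submit such a request with an SG_IO v4 ioctl on the host's bsg node. Everything in the sketch below that is not visible in the hunks above is an assumption: the node name, the locally defined ISCSI_BSG_HST_VENDOR value (taken to mirror scsi_bsg_iscsi.h), the request layout, and the 4 KiB payload sizes.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/bsg.h>		/* struct sg_io_v4, BSG_PROTOCOL_SCSI */
#include <scsi/sg.h>		/* SG_IO */

#define ISCSI_BSG_HST_VENDOR	0x800000ffu	/* assumed; see scsi_bsg_iscsi.h */
#define BEISCSI_READ_FLASH	1		/* from be_mgmt.h above */

/* Assumed wire layout of the vendor request consumed by
 * mgmt_vendor_specific_fw_cmd(): msgcode, vendor id, then
 * vendor_cmd[0..4] = sub-opcode, region, sector size, sector, offset. */
struct vendor_bsg_req {
	uint32_t msgcode;
	uint64_t vendor_id;
	uint32_t vendor_cmd[5];
} __attribute__((packed));

int main(void)
{
	uint8_t din[4096], dout[4096], reply[256];
	struct vendor_bsg_req req;
	struct sg_io_v4 io;
	int fd;

	/* Node name is a guess -- check /dev/bsg/ on the target system. */
	fd = open("/dev/bsg/iscsi_host!host0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&req, 0, sizeof(req));
	req.msgcode = ISCSI_BSG_HST_VENDOR;
	req.vendor_cmd[0] = BEISCSI_READ_FLASH;
	req.vendor_cmd[1] = 0;		/* region */
	req.vendor_cmd[2] = 512;	/* sector size */
	req.vendor_cmd[3] = 0;		/* sector */
	req.vendor_cmd[4] = 0;		/* offset */

	memset(&io, 0, sizeof(io));
	io.guard = 'Q';
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io.request = (uintptr_t)&req;
	io.request_len = sizeof(req);
	io.response = (uintptr_t)reply;
	io.max_response_len = sizeof(reply);
	io.dout_xferp = (uintptr_t)dout;	/* request payload the driver sizes its DMA buffer from */
	io.dout_xfer_len = sizeof(dout);
	io.din_xferp = (uintptr_t)din;		/* receives the copied-back firmware response */
	io.din_xfer_len = sizeof(din);

	if (ioctl(fd, SG_IO, &io) < 0)
		perror("SG_IO");
	close(fd);
	return 0;
}
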
+
 static void beiscsi_quiesce(struct beiscsi_hba *phba)
 {
        struct hwi_controller *phwi_ctrlr;
@@ -4183,6 +4264,7 @@ static void beiscsi_remove(struct pci_dev *pcidev)
                return;
        }
 
+       beiscsi_destroy_def_ifaces(phba);
        beiscsi_quiesce(phba);
        iscsi_boot_destroy_kset(phba->boot_kset);
        iscsi_host_remove(phba->shost);
@@ -4267,8 +4349,11 @@ static int __devinit beiscsi_dev_probe(struct pci_dev *pcidev,
        phba->num_cpus = num_cpus;
        SE_DEBUG(DBG_LVL_8, "num_cpus = %d\n", phba->num_cpus);
 
-       if (enable_msix)
+       if (enable_msix) {
                beiscsi_msix_enable(phba);
+               if (!phba->msix_enabled)
+                       phba->num_cpus = 1;
+       }
        ret = be_ctrl_init(phba, pcidev);
        if (ret) {
                shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-"
@@ -4366,8 +4451,9 @@ static int __devinit beiscsi_dev_probe(struct pci_dev *pcidev,
                 * iscsi boot.
                 */
                shost_printk(KERN_ERR, phba->shost, "Could not set up "
-                            "iSCSI boot info.");
+                            "iSCSI boot info.\n");
 
+       beiscsi_create_def_ifaces(phba);
        SE_DEBUG(DBG_LVL_8, "\n\n\n SUCCESS - DRIVER LOADED\n\n\n");
        return 0;
 
@@ -4418,6 +4504,8 @@ struct iscsi_transport beiscsi_iscsi_transport = {
        .bind_conn = beiscsi_conn_bind,
        .destroy_conn = iscsi_conn_teardown,
        .attr_is_visible = be2iscsi_attr_is_visible,
+       .set_iface_param = be2iscsi_iface_set_param,
+       .get_iface_param = be2iscsi_iface_get_param,
        .set_param = beiscsi_set_param,
        .get_conn_param = iscsi_conn_get_param,
        .get_session_param = iscsi_session_get_param,
@@ -4435,6 +4523,7 @@ struct iscsi_transport beiscsi_iscsi_transport = {
        .ep_poll = beiscsi_ep_poll,
        .ep_disconnect = beiscsi_ep_disconnect,
        .session_recovery_timedout = iscsi_session_recovery_timedout,
+       .bsg_request = beiscsi_bsg_request,
 };
 
 static struct pci_driver beiscsi_pci_driver = {
index b4a06d5..40fea6e 100644
@@ -34,9 +34,9 @@
 
 #include "be.h"
 #define DRV_NAME               "be2iscsi"
-#define BUILD_STR              "4.1.239.0"
-#define BE_NAME                        "ServerEngines BladeEngine2" \
-                               "Linux iSCSI Driver version" BUILD_STR
+#define BUILD_STR              "4.2.162.0"
+#define BE_NAME                        "Emulex OneConnect " \
+                               "Open-iSCSI Driver version " BUILD_STR
 #define DRV_DESC               BE_NAME " " "Driver"
 
 #define BE_VENDOR_ID           0x19A2
@@ -316,6 +316,8 @@ struct beiscsi_hba {
        struct iscsi_endpoint **ep_array;
        struct iscsi_boot_kset *boot_kset;
        struct Scsi_Host *shost;
+       struct iscsi_iface *ipv4_iface;
+       struct iscsi_iface *ipv6_iface;
        struct {
                /**
                 * group together since they are used most frequently
@@ -345,7 +347,7 @@ struct beiscsi_hba {
        struct work_struct work_cqs;    /* The work being queued */
        struct be_ctrl_info ctrl;
        unsigned int generation;
-       unsigned int read_mac_address;
+       unsigned int interface_handle;
        struct mgmt_session_info boot_sess;
        struct invalidate_command_table inv_tbl[128];
 
@@ -525,8 +527,6 @@ struct hwi_async_pdu_context {
 
                unsigned int free_entries;
                unsigned int busy_entries;
-               unsigned int buffer_size;
-               unsigned int num_entries;
 
                struct list_head free_list;
        } async_header;
@@ -543,11 +543,12 @@ struct hwi_async_pdu_context {
 
                unsigned int free_entries;
                unsigned int busy_entries;
-               unsigned int buffer_size;
                struct list_head free_list;
-               unsigned int num_entries;
        } async_data;
 
+       unsigned int buffer_size;
+       unsigned int num_entries;
+
        /**
         * This is a varying size list! Do not add anything
         * after this entry!!
index 44762cf..01bb04c 100644
  * Costa Mesa, CA 92626
  */
 
+#include <linux/bsg-lib.h>
+#include <scsi/scsi_transport_iscsi.h>
+#include <scsi/scsi_bsg_iscsi.h>
 #include "be_mgmt.h"
 #include "be_iscsi.h"
-#include <scsi/scsi_transport_iscsi.h>
 
-unsigned int beiscsi_get_boot_target(struct beiscsi_hba *phba)
+unsigned int mgmt_get_boot_target(struct beiscsi_hba *phba)
 {
        struct be_ctrl_info *ctrl = &phba->ctrl;
        struct be_mcc_wrb *wrb;
-       struct be_cmd_req_get_mac_addr *req;
+       struct be_cmd_get_boot_target_req *req;
        unsigned int tag = 0;
 
        SE_DEBUG(DBG_LVL_8, "In beiscsi_get_boot_target\n");
@@ -42,22 +44,22 @@ unsigned int beiscsi_get_boot_target(struct beiscsi_hba *phba)
        be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
        be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI_INI,
                           OPCODE_ISCSI_INI_BOOT_GET_BOOT_TARGET,
-                          sizeof(*req));
+                          sizeof(struct be_cmd_get_boot_target_resp));
 
        be_mcc_notify(phba);
        spin_unlock(&ctrl->mbox_lock);
        return tag;
 }
 
-unsigned int beiscsi_get_session_info(struct beiscsi_hba *phba,
-                                 u32 boot_session_handle,
-                                 struct be_dma_mem *nonemb_cmd)
+unsigned int mgmt_get_session_info(struct beiscsi_hba *phba,
+                                  u32 boot_session_handle,
+                                  struct be_dma_mem *nonemb_cmd)
 {
        struct be_ctrl_info *ctrl = &phba->ctrl;
        struct be_mcc_wrb *wrb;
        unsigned int tag = 0;
-       struct  be_cmd_req_get_session *req;
-       struct be_cmd_resp_get_session *resp;
+       struct  be_cmd_get_session_req *req;
+       struct be_cmd_get_session_resp *resp;
        struct be_sge *sge;
 
        SE_DEBUG(DBG_LVL_8, "In beiscsi_get_session_info\n");
@@ -187,6 +189,72 @@ int mgmt_check_supported_fw(struct be_ctrl_info *ctrl,
        return status;
 }
 
+unsigned int mgmt_vendor_specific_fw_cmd(struct be_ctrl_info *ctrl,
+                                        struct beiscsi_hba *phba,
+                                        struct bsg_job *job,
+                                        struct be_dma_mem *nonemb_cmd)
+{
+       struct be_cmd_resp_hdr *resp;
+       struct be_mcc_wrb *wrb = wrb_from_mccq(phba);
+       struct be_sge *mcc_sge = nonembedded_sgl(wrb);
+       unsigned int tag = 0;
+       struct iscsi_bsg_request *bsg_req = job->request;
+       struct be_bsg_vendor_cmd *req = nonemb_cmd->va;
+       unsigned short region, sector_size, sector, offset;
+
+       nonemb_cmd->size = job->request_payload.payload_len;
+       memset(nonemb_cmd->va, 0, nonemb_cmd->size);
+       resp = nonemb_cmd->va;
+       region =  bsg_req->rqst_data.h_vendor.vendor_cmd[1];
+       sector_size =  bsg_req->rqst_data.h_vendor.vendor_cmd[2];
+       sector =  bsg_req->rqst_data.h_vendor.vendor_cmd[3];
+       offset =  bsg_req->rqst_data.h_vendor.vendor_cmd[4];
+       req->region = region;
+       req->sector = sector;
+       req->offset = offset;
+       spin_lock(&ctrl->mbox_lock);
+       memset(wrb, 0, sizeof(*wrb));
+
+       switch (bsg_req->rqst_data.h_vendor.vendor_cmd[0]) {
+       case BEISCSI_WRITE_FLASH:
+               offset = sector * sector_size + offset;
+               be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI,
+                                  OPCODE_COMMON_WRITE_FLASH, sizeof(*req));
+               sg_copy_to_buffer(job->request_payload.sg_list,
+                                 job->request_payload.sg_cnt,
+                                 nonemb_cmd->va + offset, job->request_len);
+               break;
+       case BEISCSI_READ_FLASH:
+               be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI,
+                          OPCODE_COMMON_READ_FLASH, sizeof(*req));
+               break;
+       default:
+               shost_printk(KERN_WARNING, phba->shost,
+                            "Unsupported cmd = 0x%x\n\n", bsg_req->rqst_data.
+                            h_vendor.vendor_cmd[0]);
+               spin_unlock(&ctrl->mbox_lock);
+               return -ENOSYS;
+       }
+
+       tag = alloc_mcc_tag(phba);
+       if (!tag) {
+               spin_unlock(&ctrl->mbox_lock);
+               return tag;
+       }
+
+       be_wrb_hdr_prepare(wrb, nonemb_cmd->size, false,
+                          job->request_payload.sg_cnt);
+       mcc_sge->pa_hi = cpu_to_le32(upper_32_bits(nonemb_cmd->dma));
+       mcc_sge->pa_lo = cpu_to_le32(nonemb_cmd->dma & 0xFFFFFFFF);
+       mcc_sge->len = cpu_to_le32(nonemb_cmd->size);
+       wrb->tag0 |= tag;
+
+       be_mcc_notify(phba);
+
+       spin_unlock(&ctrl->mbox_lock);
+       return tag;
+}
+
 int mgmt_epfw_cleanup(struct beiscsi_hba *phba, unsigned short chute)
 {
        struct be_ctrl_info *ctrl = &phba->ctrl;
@@ -328,7 +396,6 @@ int mgmt_open_connection(struct beiscsi_hba *phba,
                         struct sockaddr *dst_addr,
                         struct beiscsi_endpoint *beiscsi_ep,
                         struct be_dma_mem *nonemb_cmd)
-
 {
        struct hwi_controller *phwi_ctrlr;
        struct hwi_context_memory *phwi_context;
@@ -374,17 +441,17 @@ int mgmt_open_connection(struct beiscsi_hba *phba,
        if (dst_addr->sa_family == PF_INET) {
                __be32 s_addr = daddr_in->sin_addr.s_addr;
                req->ip_address.ip_type = BE2_IPV4;
-               req->ip_address.ip_address[0] = s_addr & 0x000000ff;
-               req->ip_address.ip_address[1] = (s_addr & 0x0000ff00) >> 8;
-               req->ip_address.ip_address[2] = (s_addr & 0x00ff0000) >> 16;
-               req->ip_address.ip_address[3] = (s_addr & 0xff000000) >> 24;
+               req->ip_address.addr[0] = s_addr & 0x000000ff;
+               req->ip_address.addr[1] = (s_addr & 0x0000ff00) >> 8;
+               req->ip_address.addr[2] = (s_addr & 0x00ff0000) >> 16;
+               req->ip_address.addr[3] = (s_addr & 0xff000000) >> 24;
                req->tcp_port = ntohs(daddr_in->sin_port);
                beiscsi_ep->dst_addr = daddr_in->sin_addr.s_addr;
                beiscsi_ep->dst_tcpport = ntohs(daddr_in->sin_port);
                beiscsi_ep->ip_type = BE2_IPV4;
        } else if (dst_addr->sa_family == PF_INET6) {
                req->ip_address.ip_type = BE2_IPV6;
-               memcpy(&req->ip_address.ip_address,
+               memcpy(&req->ip_address.addr,
                       &daddr_in6->sin6_addr.in6_u.u6_addr8, 16);
                req->tcp_port = ntohs(daddr_in6->sin6_port);
                beiscsi_ep->dst_tcpport = ntohs(daddr_in6->sin6_port);
@@ -419,14 +486,399 @@ int mgmt_open_connection(struct beiscsi_hba *phba,
        return tag;
 }
 
-unsigned int be_cmd_get_mac_addr(struct beiscsi_hba *phba)
+unsigned int mgmt_get_all_if_id(struct beiscsi_hba *phba)
 {
        struct be_ctrl_info *ctrl = &phba->ctrl;
-       struct be_mcc_wrb *wrb;
-       struct be_cmd_req_get_mac_addr *req;
+       struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
+       struct be_cmd_get_all_if_id_req *req = embedded_payload(wrb);
+       struct be_cmd_get_all_if_id_req *pbe_allid = req;
+       int status = 0;
+
+       memset(wrb, 0, sizeof(*wrb));
+
+       spin_lock(&ctrl->mbox_lock);
+
+       be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
+       be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI,
+                          OPCODE_COMMON_ISCSI_NTWK_GET_ALL_IF_ID,
+                          sizeof(*req));
+       status = be_mbox_notify(ctrl);
+       if (!status)
+               phba->interface_handle = pbe_allid->if_hndl_list[0];
+       else {
+               shost_printk(KERN_WARNING, phba->shost,
+                            "Failed in mgmt_get_all_if_id\n");
+       }
+       spin_unlock(&ctrl->mbox_lock);
+
+       return status;
+}
+
+static int mgmt_exec_nonemb_cmd(struct beiscsi_hba *phba,
+                               struct be_dma_mem *nonemb_cmd, void *resp_buf,
+                               int resp_buf_len)
+{
+       struct be_ctrl_info *ctrl = &phba->ctrl;
+       struct be_mcc_wrb *wrb = wrb_from_mccq(phba);
+       unsigned short status, extd_status;
+       struct be_sge *sge;
+       unsigned int tag;
+       int rc = 0;
+
+       spin_lock(&ctrl->mbox_lock);
+       tag = alloc_mcc_tag(phba);
+       if (!tag) {
+               spin_unlock(&ctrl->mbox_lock);
+               rc = -ENOMEM;
+               goto free_cmd;
+       }
+       memset(wrb, 0, sizeof(*wrb));
+       wrb->tag0 |= tag;
+       sge = nonembedded_sgl(wrb);
+
+       be_wrb_hdr_prepare(wrb, nonemb_cmd->size, false, 1);
+       sge->pa_hi = cpu_to_le32(upper_32_bits(nonemb_cmd->dma));
+       sge->pa_lo = cpu_to_le32(nonemb_cmd->dma & 0xFFFFFFFF);
+       sge->len = cpu_to_le32(nonemb_cmd->size);
+
+       be_mcc_notify(phba);
+       spin_unlock(&ctrl->mbox_lock);
+
+       wait_event_interruptible(phba->ctrl.mcc_wait[tag],
+                                phba->ctrl.mcc_numtag[tag]);
+
+       extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
+       status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
+       if (status || extd_status) {
+               SE_DEBUG(DBG_LVL_1,
+                        "mgmt_exec_nonemb_cmd Failed status = %d"
+                        "extd_status = %d\n", status, extd_status);
+               rc = -EIO;
+               goto free_tag;
+       }
+
+       if (resp_buf)
+               memcpy(resp_buf, nonemb_cmd->va, resp_buf_len);
+
+free_tag:
+       free_mcc_tag(&phba->ctrl, tag);
+free_cmd:
+       pci_free_consistent(ctrl->pdev, nonemb_cmd->size,
+                           nonemb_cmd->va, nonemb_cmd->dma);
+       return rc;
+}
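
mgmt_exec_nonemb_cmd() is the synchronous template the new management helpers share: allocate an MCC tag, post a non-embedded WRB pointing at the DMA command buffer, sleep on mcc_wait[tag], and decode the packed completion word. That decode step recurs verbatim in several hunks above; restated as a tiny helper (the name is illustrative):

/*
 * ctrl->mcc_numtag[tag] packs the MCC completion: extended status in
 * bits 15:8, base status in bits 7:0.  Both must be zero for the
 * firmware command to count as successful.
 */
static inline int example_mcc_status(unsigned int compl_word,
				     unsigned short *extd_status,
				     unsigned short *status)
{
	*extd_status = (compl_word & 0x0000FF00) >> 8;
	*status = compl_word & 0x000000FF;
	return (*status || *extd_status) ? -1 : 0;
}
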
+
+static int mgmt_alloc_cmd_data(struct beiscsi_hba *phba, struct be_dma_mem *cmd,
+                              int iscsi_cmd, int size)
+{
+       cmd->va = pci_alloc_consistent(phba->ctrl.pdev, size,
+                                      &cmd->dma);
+       if (!cmd->va) {
+               SE_DEBUG(DBG_LVL_1, "Failed to allocate memory for if info\n");
+               return -ENOMEM;
+       }
+       memset(cmd->va, 0, size);
+       cmd->size = size;
+       be_cmd_hdr_prepare(cmd->va, CMD_SUBSYSTEM_ISCSI, iscsi_cmd, size);
+       return 0;
+}
+
+static int
+mgmt_static_ip_modify(struct beiscsi_hba *phba,
+                     struct be_cmd_get_if_info_resp *if_info,
+                     struct iscsi_iface_param_info *ip_param,
+                     struct iscsi_iface_param_info *subnet_param,
+                     uint32_t ip_action)
+{
+       struct be_cmd_set_ip_addr_req *req;
+       struct be_dma_mem nonemb_cmd;
+       uint32_t ip_type;
+       int rc;
+
+       rc = mgmt_alloc_cmd_data(phba, &nonemb_cmd,
+                                OPCODE_COMMON_ISCSI_NTWK_MODIFY_IP_ADDR,
+                                sizeof(*req));
+       if (rc)
+               return rc;
+
+       ip_type = (ip_param->param == ISCSI_NET_PARAM_IPV6_ADDR) ?
+               BE2_IPV6 : BE2_IPV4;
+
+       req = nonemb_cmd.va;
+       req->ip_params.record_entry_count = 1;
+       req->ip_params.ip_record.action = ip_action;
+       req->ip_params.ip_record.interface_hndl =
+               phba->interface_handle;
+       req->ip_params.ip_record.ip_addr.size_of_structure =
+               sizeof(struct be_ip_addr_subnet_format);
+       req->ip_params.ip_record.ip_addr.ip_type = ip_type;
+
+       if (ip_action == IP_ACTION_ADD) {
+               memcpy(req->ip_params.ip_record.ip_addr.addr, ip_param->value,
+                      ip_param->len);
+
+               if (subnet_param)
+                       memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
+                              subnet_param->value, subnet_param->len);
+       } else {
+               memcpy(req->ip_params.ip_record.ip_addr.addr,
+                      if_info->ip_addr.addr, ip_param->len);
+
+               memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
+                      if_info->ip_addr.subnet_mask, ip_param->len);
+       }
+
+       rc = mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
+       if (rc < 0)
+               shost_printk(KERN_WARNING, phba->shost,
+                            "Failed to Modify existing IP Address\n");
+       return rc;
+}
+
+static int mgmt_modify_gateway(struct beiscsi_hba *phba, uint8_t *gt_addr,
+                              uint32_t gtway_action, uint32_t param_len)
+{
+       struct be_cmd_set_def_gateway_req *req;
+       struct be_dma_mem nonemb_cmd;
+       int rt_val;
+
+
+                               OPCODE_COMMON_ISCSI_NTWK_MODIFY_DEFAULT_GATEWAY,
+                               sizeof(*req));
+       if (rt_val)
+               return rt_val;
+
+       req = nonemb_cmd.va;
+       req->action = gtway_action;
+       req->ip_addr.ip_type = BE2_IPV4;
+
+       memcpy(req->ip_addr.addr, gt_addr, param_len);
+
+       return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
+}
+
+int mgmt_set_ip(struct beiscsi_hba *phba,
+               struct iscsi_iface_param_info *ip_param,
+               struct iscsi_iface_param_info *subnet_param,
+               uint32_t boot_proto)
+{
+       struct be_cmd_get_def_gateway_resp gtway_addr_set;
+       struct be_cmd_get_if_info_resp if_info;
+       struct be_cmd_set_dhcp_req *dhcpreq;
+       struct be_cmd_rel_dhcp_req *reldhcp;
+       struct be_dma_mem nonemb_cmd;
+       uint8_t *gtway_addr;
+       uint32_t ip_type;
+       int rc;
+
+       if (mgmt_get_all_if_id(phba))
+               return -EIO;
+
+       memset(&if_info, 0, sizeof(if_info));
+       ip_type = (ip_param->param == ISCSI_NET_PARAM_IPV6_ADDR) ?
+               BE2_IPV6 : BE2_IPV4;
+
+       rc = mgmt_get_if_info(phba, ip_type, &if_info);
+       if (rc)
+               return rc;
+
+       if (boot_proto == ISCSI_BOOTPROTO_DHCP) {
+               if (if_info.dhcp_state) {
+                       shost_printk(KERN_WARNING, phba->shost,
+                                    "DHCP Already Enabled\n");
+                       return 0;
+               }
+               /* ip_param->len is 1 in the DHCP case. Set the proper
+                * IP length here since it is used later when freeing
+                * the static IP.
+                */
+               ip_param->len = (ip_param->param == ISCSI_NET_PARAM_IPV6_ADDR) ?
+                               IP_V6_LEN : IP_V4_LEN;
+
+       } else {
+               if (if_info.dhcp_state) {
+
+                       memset(&if_info, 0, sizeof(if_info));
+                       rc = mgmt_alloc_cmd_data(phba, &nonemb_cmd,
+                               OPCODE_COMMON_ISCSI_NTWK_REL_STATELESS_IP_ADDR,
+                               sizeof(*reldhcp));
+
+                       if (rc)
+                               return rc;
+
+                       reldhcp = nonemb_cmd.va;
+                       reldhcp->interface_hndl = phba->interface_handle;
+                       reldhcp->ip_type = ip_type;
+
+                       rc = mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
+                       if (rc < 0) {
+                               shost_printk(KERN_WARNING, phba->shost,
+                                            "Failed to Delete existing dhcp\n");
+                               return rc;
+                       }
+               }
+       }
+
+       /* Delete the Static IP Set */
+       if (if_info.ip_addr.addr[0]) {
+               rc = mgmt_static_ip_modify(phba, &if_info, ip_param, NULL,
+                                          IP_ACTION_DEL);
+               if (rc)
+                       return rc;
+       }
+
+       /* Delete the Gateway settings if mode change is to DHCP */
+       if (boot_proto == ISCSI_BOOTPROTO_DHCP) {
+               memset(&gtway_addr_set, 0, sizeof(gtway_addr_set));
+               rc = mgmt_get_gateway(phba, BE2_IPV4, &gtway_addr_set);
+               if (rc) {
+                       shost_printk(KERN_WARNING, phba->shost,
+                                    "Failed to Get Gateway Addr\n");
+                       return rc;
+               }
+
+               if (gtway_addr_set.ip_addr.addr[0]) {
+                       gtway_addr = (uint8_t *)&gtway_addr_set.ip_addr.addr;
+                       rc = mgmt_modify_gateway(phba, gtway_addr,
+                                                IP_ACTION_DEL, IP_V4_LEN);
+
+                       if (rc) {
+                               shost_printk(KERN_WARNING, phba->shost,
+                                            "Failed to clear Gateway Addr Set\n");
+                               return rc;
+                       }
+               }
+       }
+
+       /* Set Adapter to DHCP/Static Mode */
+       if (boot_proto == ISCSI_BOOTPROTO_DHCP) {
+               rc = mgmt_alloc_cmd_data(phba, &nonemb_cmd,
+                       OPCODE_COMMON_ISCSI_NTWK_CONFIG_STATELESS_IP_ADDR,
+                       sizeof(*dhcpreq));
+               if (rc)
+                       return rc;
+
+               dhcpreq = nonemb_cmd.va;
+               dhcpreq->flags = BLOCKING;
+               dhcpreq->retry_count = 1;
+               dhcpreq->interface_hndl = phba->interface_handle;
+               dhcpreq->ip_type = BE2_DHCP_V4;
+
+               return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
+       } else {
+               return mgmt_static_ip_modify(phba, &if_info, ip_param,
+                                            subnet_param, IP_ACTION_ADD);
+       }
+
+       return rc;
+}
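
mgmt_set_ip() becomes the single entry point for moving an interface between DHCP and static addressing: it first releases whatever was configured before (DHCP lease, existing static address, stale gateway) and then programs the requested mode. A hedged sketch of how a caller might package a static IPv4 address and netmask for it follows; the iscsi_iface_param_info layout and the ISCSI_NET_PARAM_* / ISCSI_BOOTPROTO_* constants are assumed to come from include/scsi/iscsi_if.h, and example_set_static_ipv4() is a hypothetical helper, not driver code.

/* Hypothetical caller -- not part of the patch. */
static int example_set_static_ipv4(struct beiscsi_hba *phba,
				   const uint8_t addr[IP_V4_LEN],
				   const uint8_t mask[IP_V4_LEN])
{
	uint8_t ipbuf[sizeof(struct iscsi_iface_param_info) + IP_V4_LEN];
	uint8_t nmbuf[sizeof(struct iscsi_iface_param_info) + IP_V4_LEN];
	struct iscsi_iface_param_info *ip = (struct iscsi_iface_param_info *)ipbuf;
	struct iscsi_iface_param_info *nm = (struct iscsi_iface_param_info *)nmbuf;

	memset(ipbuf, 0, sizeof(ipbuf));
	memset(nmbuf, 0, sizeof(nmbuf));

	ip->param = ISCSI_NET_PARAM_IPV4_ADDR;		/* assumed constant */
	ip->len = IP_V4_LEN;
	memcpy(ip->value, addr, IP_V4_LEN);

	nm->param = ISCSI_NET_PARAM_IPV4_SUBNET;	/* assumed constant */
	nm->len = IP_V4_LEN;
	memcpy(nm->value, mask, IP_V4_LEN);

	/* Switch the interface out of DHCP and program the static address. */
	return mgmt_set_ip(phba, ip, nm, ISCSI_BOOTPROTO_STATIC);
}
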
+
+int mgmt_set_gateway(struct beiscsi_hba *phba,
+                    struct iscsi_iface_param_info *gateway_param)
+{
+       struct be_cmd_get_def_gateway_resp gtway_addr_set;
+       uint8_t *gtway_addr;
+       int rt_val;
+
+       memset(&gtway_addr_set, 0, sizeof(gtway_addr_set));
+       rt_val = mgmt_get_gateway(phba, BE2_IPV4, &gtway_addr_set);
+       if (rt_val) {
+               shost_printk(KERN_WARNING, phba->shost,
+                            "Failed to Get Gateway Addr\n");
+               return rt_val;
+       }
+
+       if (gtway_addr_set.ip_addr.addr[0]) {
+               gtway_addr = (uint8_t *)&gtway_addr_set.ip_addr.addr;
+               rt_val = mgmt_modify_gateway(phba, gtway_addr, IP_ACTION_DEL,
+                                            gateway_param->len);
+               if (rt_val) {
+                       shost_printk(KERN_WARNING, phba->shost,
+                                    "Failed to clear Gateway Addr Set\n");
+                       return rt_val;
+               }
+       }
+
+       gtway_addr = (uint8_t *)&gateway_param->value;
+       rt_val = mgmt_modify_gateway(phba, gtway_addr, IP_ACTION_ADD,
+                                    gateway_param->len);
+
+       if (rt_val)
+               shost_printk(KERN_WARNING, phba->shost,
+                            "Failed to Set Gateway Addr\n");
+
+       return rt_val;
+}
+
+int mgmt_get_gateway(struct beiscsi_hba *phba, int ip_type,
+                    struct be_cmd_get_def_gateway_resp *gateway)
+{
+       struct be_cmd_get_def_gateway_req *req;
+       struct be_dma_mem nonemb_cmd;
+       int rc;
+
+       rc = mgmt_alloc_cmd_data(phba, &nonemb_cmd,
+                                OPCODE_COMMON_ISCSI_NTWK_GET_DEFAULT_GATEWAY,
+                                sizeof(*gateway));
+       if (rc)
+               return rc;
+
+       req = nonemb_cmd.va;
+       req->ip_type = ip_type;
+
+       return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, gateway,
+                                   sizeof(*gateway));
+}
+
+int mgmt_get_if_info(struct beiscsi_hba *phba, int ip_type,
+                    struct be_cmd_get_if_info_resp *if_info)
+{
+       struct be_cmd_get_if_info_req *req;
+       struct be_dma_mem nonemb_cmd;
+       int rc;
+
+       if (mgmt_get_all_if_id(phba))
+               return -EIO;
+
+       rc = mgmt_alloc_cmd_data(phba, &nonemb_cmd,
+                                OPCODE_COMMON_ISCSI_NTWK_GET_IF_INFO,
+                                sizeof(*if_info));
+       if (rc)
+               return rc;
+
+       req = nonemb_cmd.va;
+       req->interface_hndl = phba->interface_handle;
+       req->ip_type = ip_type;
+
+       return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, if_info,
+                                   sizeof(*if_info));
+}
+
+int mgmt_get_nic_conf(struct beiscsi_hba *phba,
+                     struct be_cmd_get_nic_conf_resp *nic)
+{
+       struct be_dma_mem nonemb_cmd;
+       int rc;
+
+       rc = mgmt_alloc_cmd_data(phba, &nonemb_cmd,
+                                OPCODE_COMMON_ISCSI_NTWK_GET_NIC_CONFIG,
+                                sizeof(*nic));
+       if (rc)
+               return rc;
+
+       return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, nic, sizeof(*nic));
+}
+
+unsigned int be_cmd_get_initname(struct beiscsi_hba *phba)
+{
        unsigned int tag = 0;
+       struct be_mcc_wrb *wrb;
+       struct be_cmd_hba_name *req;
+       struct be_ctrl_info *ctrl = &phba->ctrl;
 
-       SE_DEBUG(DBG_LVL_8, "In be_cmd_get_mac_addr\n");
        spin_lock(&ctrl->mbox_lock);
        tag = alloc_mcc_tag(phba);
        if (!tag) {
@@ -438,12 +890,38 @@ unsigned int be_cmd_get_mac_addr(struct beiscsi_hba *phba)
        req = embedded_payload(wrb);
        wrb->tag0 |= tag;
        be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
-       be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI,
-                          OPCODE_COMMON_ISCSI_NTWK_GET_NIC_CONFIG,
-                          sizeof(*req));
+       be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI_INI,
+                       OPCODE_ISCSI_INI_CFG_GET_HBA_NAME,
+                       sizeof(*req));
 
        be_mcc_notify(phba);
        spin_unlock(&ctrl->mbox_lock);
        return tag;
 }
 
+unsigned int be_cmd_get_port_speed(struct beiscsi_hba *phba)
+{
+       unsigned int tag = 0;
+       struct be_mcc_wrb *wrb;
+       struct be_cmd_ntwk_link_status_req *req;
+       struct be_ctrl_info *ctrl = &phba->ctrl;
+
+       spin_lock(&ctrl->mbox_lock);
+       tag = alloc_mcc_tag(phba);
+       if (!tag) {
+               spin_unlock(&ctrl->mbox_lock);
+               return tag;
+       }
+
+       wrb = wrb_from_mccq(phba);
+       req = embedded_payload(wrb);
+       wrb->tag0 |= tag;
+       be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
+       be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
+                       OPCODE_COMMON_NTWK_LINK_STATUS_QUERY,
+                       sizeof(*req));
+
+       be_mcc_notify(phba);
+       spin_unlock(&ctrl->mbox_lock);
+       return tag;
+}
index 0842882..5c2e376 100644
 #ifndef _BEISCSI_MGMT_
 #define _BEISCSI_MGMT_
 
-#include <linux/types.h>
-#include <linux/list.h>
+#include <scsi/scsi_bsg_iscsi.h>
 #include "be_iscsi.h"
 #include "be_main.h"
 
+#define IP_ACTION_ADD  0x01
+#define IP_ACTION_DEL  0x02
+
+#define IP_V6_LEN      16
+#define IP_V4_LEN      4
+
 /**
  * Pseudo amap definition in which each bit of the actual structure is defined
  * as a byte: used to calculate offset/shift/mask of each field
@@ -98,6 +103,10 @@ unsigned int mgmt_invalidate_icds(struct beiscsi_hba *phba,
                                struct invalidate_command_table *inv_tbl,
                                unsigned int num_invalidate, unsigned int cid,
                                struct be_dma_mem *nonemb_cmd);
+unsigned int mgmt_vendor_specific_fw_cmd(struct be_ctrl_info *ctrl,
+                                        struct beiscsi_hba *phba,
+                                        struct bsg_job *job,
+                                        struct be_dma_mem *nonemb_cmd);
 
 struct iscsi_invalidate_connection_params_in {
        struct be_cmd_req_hdr hdr;
@@ -204,6 +213,13 @@ struct be_mgmt_controller_attributes_resp {
        struct mgmt_controller_attributes params;
 } __packed;
 
+struct be_bsg_vendor_cmd {
+       struct be_cmd_req_hdr hdr;
+       unsigned short region;
+       unsigned short offset;
+       unsigned short sector;
+} __packed;
+
 /* configuration management */
 
 #define GET_MGMT_CONTROLLER_WS(phba)    (phba->pmgmt_ws)
@@ -219,12 +235,15 @@ struct be_mgmt_controller_attributes_resp {
                                /* the CMD_RESPONSE_HEADER  */
 
 #define ISCSI_GET_PDU_TEMPLATE_ADDRESS(pc, pa) {\
-    pa->lo = phba->init_mem[ISCSI_MEM_GLOBAL_HEADER].mem_array[0].\
+       pa->lo = phba->init_mem[ISCSI_MEM_GLOBAL_HEADER].mem_array[0].\
                                        bus_address.u.a32.address_lo;  \
-    pa->hi = phba->init_mem[ISCSI_MEM_GLOBAL_HEADER].mem_array[0].\
+       pa->hi = phba->init_mem[ISCSI_MEM_GLOBAL_HEADER].mem_array[0].\
                                        bus_address.u.a32.address_hi;  \
 }
 
+#define BEISCSI_WRITE_FLASH 0
+#define BEISCSI_READ_FLASH 1
+
 struct beiscsi_endpoint {
        struct beiscsi_hba *phba;
        struct beiscsi_sess *sess;
@@ -248,4 +267,27 @@ unsigned int mgmt_invalidate_connection(struct beiscsi_hba *phba,
                                         unsigned short issue_reset,
                                         unsigned short savecfg_flag);
 
+int mgmt_set_ip(struct beiscsi_hba *phba,
+               struct iscsi_iface_param_info *ip_param,
+               struct iscsi_iface_param_info *subnet_param,
+               uint32_t boot_proto);
+
+unsigned int mgmt_get_boot_target(struct beiscsi_hba *phba);
+
+unsigned int mgmt_get_session_info(struct beiscsi_hba *phba,
+                                  u32 boot_session_handle,
+                                  struct be_dma_mem *nonemb_cmd);
+
+int mgmt_get_nic_conf(struct beiscsi_hba *phba,
+                     struct be_cmd_get_nic_conf_resp *mac);
+
+int mgmt_get_if_info(struct beiscsi_hba *phba, int ip_type,
+                    struct be_cmd_get_if_info_resp *if_info);
+
+int mgmt_get_gateway(struct beiscsi_hba *phba, int ip_type,
+                    struct be_cmd_get_def_gateway_resp *gateway);
+
+int mgmt_set_gateway(struct beiscsi_hba *phba,
+                    struct iscsi_iface_param_info *gateway_param);
+
 #endif
index e75e07d..51c9e13 100644
@@ -799,9 +799,6 @@ struct bfad_port_s *bfa_fcb_lport_new(struct bfad_s *bfad,
                                      enum bfa_lport_role roles,
                                      struct bfad_vf_s *vf_drv,
                                      struct bfad_vport_s *vp_drv);
-void bfa_fcb_lport_delete(struct bfad_s *bfad, enum bfa_lport_role roles,
-                         struct bfad_vf_s *vf_drv,
-                         struct bfad_vport_s *vp_drv);
 
 /*
  * vport callbacks
index 5d2a130..937000d 100644
@@ -616,7 +616,7 @@ bfa_fcs_lport_online_actions(struct bfa_fcs_lport_s *port)
        __port_action[port->fabric->fab_type].online(port);
 
        wwn2str(lpwwn_buf, bfa_fcs_lport_get_pwwn(port));
-       BFA_LOG(KERN_INFO, bfad, bfa_log_level,
+       BFA_LOG(KERN_WARNING, bfad, bfa_log_level,
                "Logical port online: WWN = %s Role = %s\n",
                lpwwn_buf, "Initiator");
        bfa_fcs_lport_aen_post(port, BFA_LPORT_AEN_ONLINE);
@@ -639,12 +639,12 @@ bfa_fcs_lport_offline_actions(struct bfa_fcs_lport_s *port)
        wwn2str(lpwwn_buf, bfa_fcs_lport_get_pwwn(port));
        if (bfa_sm_cmp_state(port->fabric,
                        bfa_fcs_fabric_sm_online) == BFA_TRUE) {
-               BFA_LOG(KERN_ERR, bfad, bfa_log_level,
+               BFA_LOG(KERN_WARNING, bfad, bfa_log_level,
                "Logical port lost fabric connectivity: WWN = %s Role = %s\n",
                lpwwn_buf, "Initiator");
                bfa_fcs_lport_aen_post(port, BFA_LPORT_AEN_DISCONNECT);
        } else {
-               BFA_LOG(KERN_INFO, bfad, bfa_log_level,
+               BFA_LOG(KERN_WARNING, bfad, bfa_log_level,
                "Logical port taken offline: WWN = %s Role = %s\n",
                lpwwn_buf, "Initiator");
                bfa_fcs_lport_aen_post(port, BFA_LPORT_AEN_OFFLINE);
@@ -709,14 +709,10 @@ bfa_fcs_lport_deleted(struct bfa_fcs_lport_s *port)
        bfa_fcs_lport_aen_post(port, BFA_LPORT_AEN_DELETE);
 
        /* Base port will be deleted by the OS driver */
-       if (port->vport) {
-               bfa_fcb_lport_delete(port->fcs->bfad, port->port_cfg.roles,
-                               port->fabric->vf_drv,
-                               port->vport ? port->vport->vport_drv : NULL);
+       if (port->vport)
                bfa_fcs_vport_delete_comp(port->vport);
-       } else {
+       else
                bfa_wc_down(&port->fabric->wc);
-       }
 }
 
 
@@ -5714,17 +5710,23 @@ bfa_fcs_vport_free(struct bfa_fcs_vport_s *vport)
                        (struct bfad_vport_s *)vport->vport_drv;
 
        bfa_fcs_fabric_delvport(__vport_fabric(vport), vport);
+       bfa_lps_delete(vport->lps);
 
-       if (vport_drv->comp_del)
+       if (vport_drv->comp_del) {
                complete(vport_drv->comp_del);
-       else
-               kfree(vport_drv);
+               return;
+       }
 
-       bfa_lps_delete(vport->lps);
+       /*
+        * We queue the vport delete work to the IM work_q from here.
+        * The memory for the bfad_vport_s is freed from the FC function
+        * template vport_delete entry point.
+        */
+       if (vport_drv)
+               bfad_im_port_delete(vport_drv->drv_port.bfad,
+                               &vport_drv->drv_port);
 }
 
-
-
 /*
  *  fcs_vport_public FCS virtual port public interfaces
  */
index 404fd10..2e4b0be 100644
@@ -456,23 +456,6 @@ bfa_fcb_lport_new(struct bfad_s *bfad, struct bfa_fcs_lport_s *port,
        return port_drv;
 }
 
-void
-bfa_fcb_lport_delete(struct bfad_s *bfad, enum bfa_lport_role roles,
-                   struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv)
-{
-       struct bfad_port_s    *port_drv;
-
-       /* this will be only called from rmmod context */
-       if (vp_drv && !vp_drv->comp_del) {
-               port_drv = (vp_drv) ? (&(vp_drv)->drv_port) :
-                               ((vf_drv) ? (&(vf_drv)->base_port) :
-                               (&(bfad)->pport));
-               bfa_trc(bfad, roles);
-               if (roles & BFA_LPORT_ROLE_FCP_IM)
-                       bfad_im_port_delete(bfad, port_drv);
-       }
-}
-
 /*
  * FCS RPORT alloc callback, after successful PLOGI by FCS
  */
index 7b1ecd2..8b6c6bf 100644
@@ -497,6 +497,7 @@ bfad_im_vport_delete(struct fc_vport *fc_vport)
        if (im_port->flags & BFAD_PORT_DELETE) {
                bfad_scsi_host_free(bfad, im_port);
                list_del(&vport->list_entry);
+               kfree(vport);
                return 0;
        }
 
@@ -758,25 +759,10 @@ bfad_im_model_desc_show(struct device *dev, struct device_attribute *attr,
        else if (!strcmp(model, "Brocade-804"))
                snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
                        "Brocade 8Gbps FC HBA for HP Bladesystem C-class");
-       else if (!strcmp(model, "Brocade-902") ||
-                !strcmp(model, "Brocade-1741"))
+       else if (!strcmp(model, "Brocade-1741"))
                snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
                        "Brocade 10Gbps CNA for Dell M-Series Blade Servers");
-       else if (strstr(model, "Brocade-1560")) {
-               if (nports == 1)
-                       snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
-                               "Brocade 16Gbps PCIe single port FC HBA");
-               else
-                       snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
-                               "Brocade 16Gbps PCIe dual port FC HBA");
-       } else if (strstr(model, "Brocade-1710")) {
-               if (nports == 1)
-                       snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
-                               "Brocade 10Gbps single port CNA");
-               else
-                       snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
-                               "Brocade 10Gbps dual port CNA");
-       } else if (strstr(model, "Brocade-1860")) {
+       else if (strstr(model, "Brocade-1860")) {
                if (nports == 1 && bfa_ioc_is_cna(&bfad->bfa.ioc))
                        snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
                                "Brocade 10Gbps single port CNA");
index 495a841..25093a0 100644
@@ -1,6 +1,6 @@
 /* 57xx_iscsi_constants.h: Broadcom NetXtreme II iSCSI HSI
  *
- * Copyright (c) 2006 - 2011 Broadcom Corporation
+ * Copyright (c) 2006 - 2012 Broadcom Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
index 72118db..dc0a08e 100644
@@ -1,6 +1,6 @@
 /* 57xx_iscsi_hsi.h: Broadcom NetXtreme II iSCSI HSI.
  *
- * Copyright (c) 2006 - 2011 Broadcom Corporation
+ * Copyright (c) 2006 - 2012 Broadcom Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
index 0bd70e8..0c53c28 100644
@@ -1,6 +1,6 @@
 /* bnx2i.h: Broadcom NetXtreme II iSCSI driver.
  *
- * Copyright (c) 2006 - 2011 Broadcom Corporation
+ * Copyright (c) 2006 - 2012 Broadcom Corporation
  * Copyright (c) 2007, 2008 Red Hat, Inc.  All rights reserved.
  * Copyright (c) 2007, 2008 Mike Christie
  *
index f9d6f41..ece47e5 100644
@@ -1,6 +1,6 @@
 /* bnx2i_hwi.c: Broadcom NetXtreme II iSCSI driver.
  *
- * Copyright (c) 2006 - 2011 Broadcom Corporation
+ * Copyright (c) 2006 - 2012 Broadcom Corporation
  * Copyright (c) 2007, 2008 Red Hat, Inc.  All rights reserved.
  * Copyright (c) 2007, 2008 Mike Christie
  *
index 4927cca..8b68167 100644
@@ -1,6 +1,6 @@
 /* bnx2i.c: Broadcom NetXtreme II iSCSI driver.
  *
- * Copyright (c) 2006 - 2011 Broadcom Corporation
+ * Copyright (c) 2006 - 2012 Broadcom Corporation
  * Copyright (c) 2007, 2008 Red Hat, Inc.  All rights reserved.
  * Copyright (c) 2007, 2008 Mike Christie
  *
@@ -18,8 +18,8 @@ static struct list_head adapter_list = LIST_HEAD_INIT(adapter_list);
 static u32 adapter_count;
 
 #define DRV_MODULE_NAME                "bnx2i"
-#define DRV_MODULE_VERSION     "2.7.0.3"
-#define DRV_MODULE_RELDATE     "Jun 15, 2011"
+#define DRV_MODULE_VERSION     "2.7.2.2"
+#define DRV_MODULE_RELDATE     "Apr 25, 2012"
 
 static char version[] __devinitdata =
                "Broadcom NetXtreme II iSCSI Driver " DRV_MODULE_NAME \
index 1a44b45..f8d516b 100644 (file)
@@ -1,7 +1,7 @@
 /*
  * bnx2i_iscsi.c: Broadcom NetXtreme II iSCSI driver.
  *
- * Copyright (c) 2006 - 2011 Broadcom Corporation
+ * Copyright (c) 2006 - 2012 Broadcom Corporation
  * Copyright (c) 2007, 2008 Red Hat, Inc.  All rights reserved.
  * Copyright (c) 2007, 2008 Mike Christie
  *
@@ -2244,6 +2244,7 @@ static struct scsi_host_template bnx2i_host_template = {
        .eh_device_reset_handler = iscsi_eh_device_reset,
        .eh_target_reset_handler = iscsi_eh_recover_target,
        .change_queue_depth     = iscsi_change_queue_depth,
+       .target_alloc           = iscsi_target_alloc,
        .can_queue              = 2048,
        .max_sectors            = 127,
        .cmd_per_lun            = 128,
index 83a77f7..c61cf7a 100644 (file)
@@ -1,6 +1,6 @@
 /* bnx2i_sysfs.c: Broadcom NetXtreme II iSCSI driver.
  *
- * Copyright (c) 2004 - 2011 Broadcom Corporation
+ * Copyright (c) 2004 - 2012 Broadcom Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
index 04c5cea..fda9cde 100644 (file)
 #define ALUA_FAILOVER_TIMEOUT          (60 * HZ)
 #define ALUA_FAILOVER_RETRIES          5
 
+/* flags passed from user level */
+#define ALUA_OPTIMIZE_STPG             1
+
 struct alua_dh_data {
        int                     group_id;
        int                     rel_port;
        int                     tpgs;
        int                     state;
+       int                     pref;
+       unsigned                flags; /* used for optimizing STPG */
        unsigned char           inq[ALUA_INQUIRY_SIZE];
        unsigned char           *buff;
        int                     bufflen;
@@ -554,14 +559,16 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_dh_data *h)
        for (k = 4, ucp = h->buff + 4; k < len; k += off, ucp += off) {
                if (h->group_id == (ucp[2] << 8) + ucp[3]) {
                        h->state = ucp[0] & 0x0f;
+                       h->pref = ucp[0] >> 7;
                        valid_states = ucp[1];
                }
                off = 8 + (ucp[7] * 4);
        }
 
        sdev_printk(KERN_INFO, sdev,
-                   "%s: port group %02x state %c supports %c%c%c%c%c%c%c\n",
+                   "%s: port group %02x state %c %s supports %c%c%c%c%c%c%c\n",
                    ALUA_DH_NAME, h->group_id, print_alua_state(h->state),
+                   h->pref ? "preferred" : "non-preferred",
                    valid_states&TPGS_SUPPORT_TRANSITION?'T':'t',
                    valid_states&TPGS_SUPPORT_OFFLINE?'O':'o',
                    valid_states&TPGS_SUPPORT_LBA_DEPENDENT?'L':'l',
@@ -621,6 +628,37 @@ static int alua_initialize(struct scsi_device *sdev, struct alua_dh_data *h)
 out:
        return err;
 }
+/*
+ * alua_set_params - set/unset the optimize flag
+ * @sdev: device on the path to be activated
+ * @params: parameters in the following format
+ *      "no_of_params\0param1\0param2\0param3\0...\0"
+ * For example, to set the flag, pass the following parameters
+ * from multipath.conf:
+ *     hardware_handler        "2 alua 1"
+ */
+static int alua_set_params(struct scsi_device *sdev, const char *params)
+{
+       struct alua_dh_data *h = get_alua_data(sdev);
+       unsigned int optimize = 0, argc;
+       const char *p = params;
+       int result = SCSI_DH_OK;
+
+       if ((sscanf(params, "%u", &argc) != 1) || (argc != 1))
+               return -EINVAL;
+
+       while (*p++)
+               ;
+       if ((sscanf(p, "%u", &optimize) != 1) || (optimize > 1))
+               return -EINVAL;
+
+       if (optimize)
+               h->flags |= ALUA_OPTIMIZE_STPG;
+       else
+               h->flags &= ~ALUA_OPTIMIZE_STPG;
+
+       return result;
+}
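
For reference, the "no_of_params\0param1\0..." framing described above is how the hardware-handler arguments reach alua_set_params(): for the multipath.conf example, the kernel sees the count "1" followed by the single parameter "1". A minimal user-space sketch (illustrative only, not driver code) that builds such a buffer and parses it the same way the function does:

    #include <stdio.h>

    int main(void)
    {
        /* "hardware_handler \"2 alua 1\"" arrives as "no_of_params\0param1\0",
         * i.e. the count "1" followed by the parameter "1" (optimize STPG on).
         */
        const char params[] = "1\0" "1";
        const char *p = params;
        unsigned int argc, optimize;

        if (sscanf(p, "%u", &argc) != 1 || argc != 1)
            return 1;
        while (*p++)                    /* skip the count and its terminating NUL */
            ;
        if (sscanf(p, "%u", &optimize) != 1 || optimize > 1)
            return 1;
        printf("ALUA_OPTIMIZE_STPG %s\n", optimize ? "set" : "cleared");
        return 0;
    }
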
 
 /*
  * alua_activate - activate a path
@@ -637,14 +675,37 @@ static int alua_activate(struct scsi_device *sdev,
 {
        struct alua_dh_data *h = get_alua_data(sdev);
        int err = SCSI_DH_OK;
+       int stpg = 0;
 
        err = alua_rtpg(sdev, h);
        if (err != SCSI_DH_OK)
                goto out;
 
-       if (h->tpgs & TPGS_MODE_EXPLICIT &&
-           h->state != TPGS_STATE_OPTIMIZED &&
-           h->state != TPGS_STATE_LBA_DEPENDENT) {
+       if (h->tpgs & TPGS_MODE_EXPLICIT) {
+               switch (h->state) {
+               case TPGS_STATE_NONOPTIMIZED:
+                       stpg = 1;
+                       if ((h->flags & ALUA_OPTIMIZE_STPG) &&
+                           (!h->pref) &&
+                           (h->tpgs & TPGS_MODE_IMPLICIT))
+                               stpg = 0;
+                       break;
+               case TPGS_STATE_STANDBY:
+                       stpg = 1;
+                       break;
+               case TPGS_STATE_UNAVAILABLE:
+               case TPGS_STATE_OFFLINE:
+                       err = SCSI_DH_IO;
+                       break;
+               case TPGS_STATE_TRANSITIONING:
+                       err = SCSI_DH_RETRY;
+                       break;
+               default:
+                       break;
+               }
+       }
+
+       if (stpg) {
                h->callback_fn = fn;
                h->callback_data = data;
                err = submit_stpg(h);
@@ -698,6 +759,7 @@ static struct scsi_device_handler alua_dh = {
        .prep_fn = alua_prep_fn,
        .check_sense = alua_check_sense,
        .activate = alua_activate,
+       .set_params = alua_set_params,
        .match = alua_match,
 };
 
index 335e851..76e3d0b 100644 (file)
@@ -411,20 +411,18 @@ out:
 }
 
 /**
- * fcoe_interface_cleanup() - Clean up a FCoE interface
+ * fcoe_interface_remove() - remove FCoE interface from netdev
  * @fcoe: The FCoE interface to be cleaned up
  *
  * Caller must be holding the RTNL mutex
  */
-static void fcoe_interface_cleanup(struct fcoe_interface *fcoe)
+static void fcoe_interface_remove(struct fcoe_interface *fcoe)
 {
        struct net_device *netdev = fcoe->netdev;
        struct fcoe_ctlr *fip = &fcoe->ctlr;
        u8 flogi_maddr[ETH_ALEN];
        const struct net_device_ops *ops;
 
-       rtnl_lock();
-
        /*
         * Don't listen for Ethernet packets anymore.
         * synchronize_net() ensures that the packet handlers are not running
@@ -453,12 +451,28 @@ static void fcoe_interface_cleanup(struct fcoe_interface *fcoe)
                        FCOE_NETDEV_DBG(netdev, "Failed to disable FCoE"
                                        " specific feature for LLD.\n");
        }
+       fcoe->removed = 1;
+}
+
+
+/**
+ * fcoe_interface_cleanup() - Clean up a FCoE interface
+ * @fcoe: The FCoE interface to be cleaned up
+ */
+static void fcoe_interface_cleanup(struct fcoe_interface *fcoe)
+{
+       struct net_device *netdev = fcoe->netdev;
+       struct fcoe_ctlr *fip = &fcoe->ctlr;
 
+       rtnl_lock();
+       if (!fcoe->removed)
+               fcoe_interface_remove(fcoe);
        rtnl_unlock();
 
        /* Release the self-reference taken during fcoe_interface_create() */
        /* tear-down the FCoE controller */
        fcoe_ctlr_destroy(fip);
+       scsi_host_put(fcoe->ctlr.lp->host);
        kfree(fcoe);
        dev_put(netdev);
        module_put(THIS_MODULE);
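
The effect of the split above is a run-once guard: the netdev-facing teardown can be reached either from fcoe_if_destroy() under its own rtnl_lock() or from fcoe_interface_cleanup(), and the removed flag turns whichever call comes second into a no-op. A reduced sketch of that pattern (placeholder names, with a plain mutex standing in for RTNL, not the driver's types):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for rtnl */

    struct iface {
        int removed;                    /* set once the netdev-side teardown has run */
    };

    static void iface_remove(struct iface *f)   /* caller holds cfg_lock */
    {
        printf("unhooking packet handlers and unicast addresses\n");
        f->removed = 1;
    }

    static void iface_cleanup(struct iface *f)
    {
        pthread_mutex_lock(&cfg_lock);
        if (!f->removed)                /* the destroy path may have done this already */
            iface_remove(f);
        pthread_mutex_unlock(&cfg_lock);
        free(f);                        /* release the remaining per-interface state */
    }

    int main(void)
    {
        struct iface *f = calloc(1, sizeof(*f));

        if (!f)
            return 1;
        /* destroy path runs first ... */
        pthread_mutex_lock(&cfg_lock);
        iface_remove(f);
        pthread_mutex_unlock(&cfg_lock);
        /* ... then cleanup must not unhook a second time */
        iface_cleanup(f);
        return 0;
    }
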
@@ -522,13 +536,11 @@ static void fcoe_update_src_mac(struct fc_lport *lport, u8 *addr)
        struct fcoe_port *port = lport_priv(lport);
        struct fcoe_interface *fcoe = port->priv;
 
-       rtnl_lock();
        if (!is_zero_ether_addr(port->data_src_addr))
                dev_uc_del(fcoe->netdev, port->data_src_addr);
        if (!is_zero_ether_addr(addr))
                dev_uc_add(fcoe->netdev, addr);
        memcpy(port->data_src_addr, addr, ETH_ALEN);
-       rtnl_unlock();
 }
 
 /**
@@ -941,6 +953,10 @@ static void fcoe_if_destroy(struct fc_lport *lport)
        rtnl_lock();
        if (!is_zero_ether_addr(port->data_src_addr))
                dev_uc_del(netdev, port->data_src_addr);
+       if (lport->vport)
+               synchronize_net();
+       else
+               fcoe_interface_remove(fcoe);
        rtnl_unlock();
 
        /* Free queued packets for the per-CPU receive threads */
@@ -959,8 +975,12 @@ static void fcoe_if_destroy(struct fc_lport *lport)
        /* Free memory used by statistical counters */
        fc_lport_free_stats(lport);
 
-       /* Release the Scsi_Host */
-       scsi_host_put(lport->host);
+       /*
+        * Release the Scsi_Host for vport but hold on to
+        * master lport until its fcoe interface is fully cleaned up.
+        */
+       if (lport->vport)
+               scsi_host_put(lport->host);
 }
 
 /**
@@ -2274,10 +2294,9 @@ static void fcoe_percpu_clean(struct fc_lport *lport)
                        continue;
 
                skb = dev_alloc_skb(0);
-               if (!skb) {
-                       spin_unlock_bh(&pp->fcoe_rx_list.lock);
+               if (!skb)
                        continue;
-               }
+
                skb->destructor = fcoe_percpu_flush_done;
 
                spin_lock_bh(&pp->fcoe_rx_list.lock);
index 3c2733a..96ac938 100644 (file)
@@ -71,7 +71,8 @@ do {                                                                  \
  * @ctlr:            The FCoE controller (for FIP)
  * @oem:             The offload exchange manager for all local port
  *                   instances associated with this port
- * This structure is 1:1 with a net devive.
+ * @removed:         Indicates fcoe interface removed from net device
+ * This structure is 1:1 with a net device.
  */
 struct fcoe_interface {
        struct list_head   list;
@@ -81,6 +82,7 @@ struct fcoe_interface {
        struct packet_type fip_packet_type;
        struct fcoe_ctlr   ctlr;
        struct fc_exch_mgr *oem;
+       u8      removed;
 };
 
 #define fcoe_from_ctlr(fip) container_of(fip, struct fcoe_interface, ctlr)
index 249a106..5a4c725 100644 (file)
@@ -1883,7 +1883,13 @@ static void fcoe_ctlr_vn_send(struct fcoe_ctlr *fip,
        frame = (struct fip_frame *)skb->data;
        memset(frame, 0, len);
        memcpy(frame->eth.h_dest, dest, ETH_ALEN);
-       memcpy(frame->eth.h_source, fip->ctl_src_addr, ETH_ALEN);
+
+       if (sub == FIP_SC_VN_BEACON) {
+               hton24(frame->eth.h_source, FIP_VN_FC_MAP);
+               hton24(frame->eth.h_source + 3, fip->port_id);
+       } else {
+               memcpy(frame->eth.h_source, fip->ctl_src_addr, ETH_ALEN);
+       }
        frame->eth.h_proto = htons(ETH_P_FIP);
 
        frame->fip.fip_ver = FIP_VER_ENCAPS(FIP_VER);
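
For VN_Port beacons, the source MAC above is synthesized rather than copied from ctl_src_addr: hton24() stores a 24-bit value in network byte order, so the address becomes the FC-MAP in the top three bytes followed by the local 24-bit port ID. A standalone sketch of that composition (the FC-MAP value here is a placeholder for illustration, not quoted from fc_fip.h):

    #include <stdint.h>
    #include <stdio.h>

    /* Store a 24-bit value big-endian, like the kernel's hton24() helper. */
    static void put24(uint8_t *p, uint32_t v)
    {
        p[0] = (v >> 16) & 0xff;
        p[1] = (v >> 8) & 0xff;
        p[2] = v & 0xff;
    }

    int main(void)
    {
        uint8_t mac[6];
        uint32_t fc_map  = 0x0efd00;    /* placeholder FC-MAP, for illustration only */
        uint32_t port_id = 0x123456;    /* hypothetical locally assigned port ID */

        put24(mac, fc_map);             /* bytes 0-2: FC-MAP */
        put24(mac + 3, port_id);        /* bytes 3-5: port ID */

        printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        return 0;
    }
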
index 500e20d..796482b 100644 (file)
@@ -159,6 +159,7 @@ static int hpsa_change_queue_depth(struct scsi_device *sdev,
        int qdepth, int reason);
 
 static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd);
+static int hpsa_eh_abort_handler(struct scsi_cmnd *scsicmd);
 static int hpsa_slave_alloc(struct scsi_device *sdev);
 static void hpsa_slave_destroy(struct scsi_device *sdev);
 
@@ -171,7 +172,7 @@ static void check_ioctl_unit_attention(struct ctlr_info *h,
 static void calc_bucket_map(int *bucket, int num_buckets,
        int nsgs, int *bucket_map);
 static __devinit void hpsa_put_ctlr_into_performant_mode(struct ctlr_info *h);
-static inline u32 next_command(struct ctlr_info *h);
+static inline u32 next_command(struct ctlr_info *h, u8 q);
 static int __devinit hpsa_find_cfg_addrs(struct pci_dev *pdev,
        void __iomem *vaddr, u32 *cfg_base_addr, u64 *cfg_base_addr_index,
        u64 *cfg_offset);
@@ -180,6 +181,7 @@ static int __devinit hpsa_pci_find_memory_BAR(struct pci_dev *pdev,
 static int __devinit hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id);
 static int __devinit hpsa_wait_for_board_state(struct pci_dev *pdev,
        void __iomem *vaddr, int wait_for_ready);
+static inline void finish_cmd(struct CommandList *c);
 #define BOARD_NOT_READY 0
 #define BOARD_READY 1
 
@@ -234,6 +236,16 @@ static int check_for_unit_attention(struct ctlr_info *h,
        return 1;
 }
 
+static int check_for_busy(struct ctlr_info *h, struct CommandList *c)
+{
+       if (c->err_info->CommandStatus != CMD_TARGET_STATUS ||
+               (c->err_info->ScsiStatus != SAM_STAT_BUSY &&
+                c->err_info->ScsiStatus != SAM_STAT_TASK_SET_FULL))
+               return 0;
+       dev_warn(&h->pdev->dev, HPSA "device busy");
+       return 1;
+}
+
 static ssize_t host_store_rescan(struct device *dev,
                                 struct device_attribute *attr,
                                 const char *buf, size_t count)
@@ -368,7 +380,7 @@ static inline int is_logical_dev_addr_mode(unsigned char scsi3addr[])
 }
 
 static const char *raid_label[] = { "0", "4", "1(1+0)", "5", "5+1", "ADG",
-       "UNKNOWN"
+       "1(ADM)", "UNKNOWN"
 };
 #define RAID_UNKNOWN (ARRAY_SIZE(raid_label) - 1)
 
@@ -497,6 +509,7 @@ static struct scsi_host_template hpsa_driver_template = {
        .change_queue_depth     = hpsa_change_queue_depth,
        .this_id                = -1,
        .use_clustering         = ENABLE_CLUSTERING,
+       .eh_abort_handler       = hpsa_eh_abort_handler,
        .eh_device_reset_handler = hpsa_eh_device_reset_handler,
        .ioctl                  = hpsa_ioctl,
        .slave_alloc            = hpsa_slave_alloc,
@@ -516,24 +529,28 @@ static inline void addQ(struct list_head *list, struct CommandList *c)
        list_add_tail(&c->list, list);
 }
 
-static inline u32 next_command(struct ctlr_info *h)
+static inline u32 next_command(struct ctlr_info *h, u8 q)
 {
        u32 a;
+       struct reply_pool *rq = &h->reply_queue[q];
+       unsigned long flags;
 
        if (unlikely(!(h->transMethod & CFGTBL_Trans_Performant)))
-               return h->access.command_completed(h);
+               return h->access.command_completed(h, q);
 
-       if ((*(h->reply_pool_head) & 1) == (h->reply_pool_wraparound)) {
-               a = *(h->reply_pool_head); /* Next cmd in ring buffer */
-               (h->reply_pool_head)++;
+       if ((rq->head[rq->current_entry] & 1) == rq->wraparound) {
+               a = rq->head[rq->current_entry];
+               rq->current_entry++;
+               spin_lock_irqsave(&h->lock, flags);
                h->commands_outstanding--;
+               spin_unlock_irqrestore(&h->lock, flags);
        } else {
                a = FIFO_EMPTY;
        }
        /* Check for wraparound */
-       if (h->reply_pool_head == (h->reply_pool + h->max_commands)) {
-               h->reply_pool_head = h->reply_pool;
-               h->reply_pool_wraparound ^= 1;
+       if (rq->current_entry == h->max_commands) {
+               rq->current_entry = 0;
+               rq->wraparound ^= 1;
        }
        return a;
 }
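
The per-queue consumer above keys off a wraparound parity bit: the low bit of every posted entry says whether it belongs to the current pass over the ring, so a slot left over from the previous pass reads as empty (FIFO_EMPTY in the driver). A standalone model of that handshake (hypothetical names, not the driver's reply_pool structure):

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 4
    #define EMPTY     ((uint64_t)-1)

    struct ring {
        uint64_t entry[RING_SIZE];
        unsigned int head;      /* next slot the consumer will inspect */
        unsigned int parity;    /* low bit expected on fresh entries; starts at 1 */
    };

    /* Producer side: the posted value carries the current pass's parity bit. */
    static void post(struct ring *r, unsigned int slot, uint64_t tag, unsigned int parity)
    {
        r->entry[slot] = (tag << 1) | parity;
    }

    /* Consumer side, mirroring next_command(): consume only matching-parity slots. */
    static uint64_t next(struct ring *r)
    {
        uint64_t v = r->entry[r->head];

        if ((v & 1) != r->parity)
            return EMPTY;               /* stale entry from the previous pass */
        if (++r->head == RING_SIZE) {   /* wrapped: flip the expected parity */
            r->head = 0;
            r->parity ^= 1;
        }
        return v >> 1;
    }

    int main(void)
    {
        struct ring r = { .parity = 1 };        /* pool starts zeroed, parity 1 */

        post(&r, 0, 42, 1);
        printf("%llu\n", (unsigned long long)next(&r));     /* 42 */
        printf("%s\n", next(&r) == EMPTY ? "empty" : "?");  /* empty */
        return 0;
    }
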
@@ -544,8 +561,41 @@ static inline u32 next_command(struct ctlr_info *h)
  */
 static void set_performant_mode(struct ctlr_info *h, struct CommandList *c)
 {
-       if (likely(h->transMethod & CFGTBL_Trans_Performant))
+       if (likely(h->transMethod & CFGTBL_Trans_Performant)) {
                c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1);
+               if (likely(h->msix_vector))
+                       c->Header.ReplyQueue =
+                               smp_processor_id() % h->nreply_queues;
+       }
+}
+
+static int is_firmware_flash_cmd(u8 *cdb)
+{
+       return cdb[0] == BMIC_WRITE && cdb[6] == BMIC_FLASH_FIRMWARE;
+}
+
+/*
+ * During firmware flash, the heartbeat register may not update as frequently
+ * as it should.  So we dial down lockup detection during firmware flash. and
+ * dial it back up when firmware flash completes.
+ */
+#define HEARTBEAT_SAMPLE_INTERVAL_DURING_FLASH (240 * HZ)
+#define HEARTBEAT_SAMPLE_INTERVAL (30 * HZ)
+static void dial_down_lockup_detection_during_fw_flash(struct ctlr_info *h,
+               struct CommandList *c)
+{
+       if (!is_firmware_flash_cmd(c->Request.CDB))
+               return;
+       atomic_inc(&h->firmware_flash_in_progress);
+       h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL_DURING_FLASH;
+}
+
+static void dial_up_lockup_detection_on_fw_flash_complete(struct ctlr_info *h,
+               struct CommandList *c)
+{
+       if (is_firmware_flash_cmd(c->Request.CDB) &&
+               atomic_dec_and_test(&h->firmware_flash_in_progress))
+               h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
 }
 
 static void enqueue_cmd_and_start_io(struct ctlr_info *h,
@@ -554,11 +604,12 @@ static void enqueue_cmd_and_start_io(struct ctlr_info *h,
        unsigned long flags;
 
        set_performant_mode(h, c);
+       dial_down_lockup_detection_during_fw_flash(h, c);
        spin_lock_irqsave(&h->lock, flags);
        addQ(&h->reqQ, c);
        h->Qdepth++;
-       start_io(h);
        spin_unlock_irqrestore(&h->lock, flags);
+       start_io(h);
 }
 
 static inline void removeQ(struct CommandList *c)
@@ -1193,7 +1244,7 @@ static void complete_scsi_command(struct CommandList *cp)
                                break;
                        }
                        /* Must be some other type of check condition */
-                       dev_warn(&h->pdev->dev, "cp %p has check condition: "
+                       dev_dbg(&h->pdev->dev, "cp %p has check condition: "
                                        "unknown type: "
                                        "Sense: 0x%x, ASC: 0x%x, ASCQ: 0x%x, "
                                        "Returning result: 0x%x, "
@@ -1370,16 +1421,24 @@ static void hpsa_scsi_do_simple_cmd_core_if_no_lockup(struct ctlr_info *h,
        }
 }
 
+#define MAX_DRIVER_CMD_RETRIES 25
 static void hpsa_scsi_do_simple_cmd_with_retry(struct ctlr_info *h,
        struct CommandList *c, int data_direction)
 {
-       int retry_count = 0;
+       int backoff_time = 10, retry_count = 0;
 
        do {
                memset(c->err_info, 0, sizeof(*c->err_info));
                hpsa_scsi_do_simple_cmd_core(h, c);
                retry_count++;
-       } while (check_for_unit_attention(h, c) && retry_count <= 3);
+               if (retry_count > 3) {
+                       msleep(backoff_time);
+                       if (backoff_time < 1000)
+                               backoff_time *= 2;
+               }
+       } while ((check_for_unit_attention(h, c) ||
+                       check_for_busy(h, c)) &&
+                       retry_count <= MAX_DRIVER_CMD_RETRIES);
        hpsa_pci_unmap(h->pdev, c, 1, data_direction);
 }
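
The retry loop above adds a capped exponential backoff on top of the unit-attention/busy checks: the first few attempts retry immediately, after which the driver sleeps 10 ms and doubles the delay until it saturates at 1280 ms, bounded by MAX_DRIVER_CMD_RETRIES. A quick standalone check of that schedule (illustrative only):

    #include <stdio.h>

    int main(void)
    {
        int backoff_time = 10, retry_count = 0, total_sleep = 0;

        do {
            retry_count++;
            if (retry_count > 3) {              /* same shape as the driver loop */
                total_sleep += backoff_time;    /* msleep(backoff_time) in the driver */
                if (backoff_time < 1000)
                    backoff_time *= 2;
            }
        } while (retry_count <= 25);            /* MAX_DRIVER_CMD_RETRIES */

        printf("delays applied: %d, last delay: %d ms, total sleep: %d ms\n",
               retry_count - 3, backoff_time, total_sleep);
        return 0;
    }
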
 
@@ -2065,9 +2124,8 @@ static int hpsa_scsi_queue_command_lck(struct scsi_cmnd *cmd,
                done(cmd);
                return 0;
        }
-       /* Need a lock as this is being allocated from the pool */
-       c = cmd_alloc(h);
        spin_unlock_irqrestore(&h->lock, flags);
+       c = cmd_alloc(h);
        if (c == NULL) {                        /* trouble... */
                dev_err(&h->pdev->dev, "cmd_alloc returned NULL!\n");
                return SCSI_MLQUEUE_HOST_BUSY;
@@ -2334,6 +2392,261 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
        return FAILED;
 }
 
+static void swizzle_abort_tag(u8 *tag)
+{
+       u8 original_tag[8];
+
+       memcpy(original_tag, tag, 8);
+       tag[0] = original_tag[3];
+       tag[1] = original_tag[2];
+       tag[2] = original_tag[1];
+       tag[3] = original_tag[0];
+       tag[4] = original_tag[7];
+       tag[5] = original_tag[6];
+       tag[6] = original_tag[5];
+       tag[7] = original_tag[4];
+}
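
A quick property of swizzle_abort_tag() above: it reverses the bytes within each 32-bit half of the 8-byte tag, so applying it twice yields the original tag. A standalone illustration (hypothetical sample tag values):

    #include <stdio.h>
    #include <string.h>

    /* Same transform as swizzle_abort_tag(): reverse bytes 0-3 and bytes 4-7. */
    static void swizzle(unsigned char *tag)
    {
        unsigned char o[8];

        memcpy(o, tag, 8);
        tag[0] = o[3]; tag[1] = o[2]; tag[2] = o[1]; tag[3] = o[0];
        tag[4] = o[7]; tag[5] = o[6]; tag[6] = o[5]; tag[7] = o[4];
    }

    int main(void)
    {
        unsigned char tag[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
        int i;

        swizzle(tag);           /* 44 33 22 11 88 77 66 55 */
        swizzle(tag);           /* back to 11 22 33 44 55 66 77 88 */
        for (i = 0; i < 8; i++)
            printf("%02x ", tag[i]);
        printf("\n");
        return 0;
    }
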
+
+static int hpsa_send_abort(struct ctlr_info *h, unsigned char *scsi3addr,
+       struct CommandList *abort, int swizzle)
+{
+       int rc = IO_OK;
+       struct CommandList *c;
+       struct ErrorInfo *ei;
+
+       c = cmd_special_alloc(h);
+       if (c == NULL) {        /* trouble... */
+               dev_warn(&h->pdev->dev, "cmd_special_alloc returned NULL!\n");
+               return -ENOMEM;
+       }
+
+       fill_cmd(c, HPSA_ABORT_MSG, h, abort, 0, 0, scsi3addr, TYPE_MSG);
+       if (swizzle)
+               swizzle_abort_tag(&c->Request.CDB[4]);
+       hpsa_scsi_do_simple_cmd_core(h, c);
+       dev_dbg(&h->pdev->dev, "%s: Tag:0x%08x:%08x: do_simple_cmd_core completed.\n",
+               __func__, abort->Header.Tag.upper, abort->Header.Tag.lower);
+       /* no unmap needed here because no data xfer. */
+
+       ei = c->err_info;
+       switch (ei->CommandStatus) {
+       case CMD_SUCCESS:
+               break;
+       case CMD_UNABORTABLE: /* Very common, don't make noise. */
+               rc = -1;
+               break;
+       default:
+               dev_dbg(&h->pdev->dev, "%s: Tag:0x%08x:%08x: interpreting error.\n",
+                       __func__, abort->Header.Tag.upper,
+                       abort->Header.Tag.lower);
+               hpsa_scsi_interpret_error(c);
+               rc = -1;
+               break;
+       }
+       cmd_special_free(h, c);
+       dev_dbg(&h->pdev->dev, "%s: Tag:0x%08x:%08x: Finished.\n", __func__,
+               abort->Header.Tag.upper, abort->Header.Tag.lower);
+       return rc;
+}
+
+/*
+ * hpsa_find_cmd_in_queue
+ *
+ * Used to determine whether a command (find) is still present
+ * in queue_head.   Optionally excludes the last element of queue_head.
+ *
+ * This is used to avoid unnecessary aborts.  Commands in h->reqQ have
+ * not yet been submitted, and so can be aborted by the driver without
+ * sending an abort to the hardware.
+ *
+ * Returns pointer to command if found in queue, NULL otherwise.
+ */
+static struct CommandList *hpsa_find_cmd_in_queue(struct ctlr_info *h,
+                       struct scsi_cmnd *find, struct list_head *queue_head)
+{
+       unsigned long flags;
+       struct CommandList *c = NULL;   /* ptr into cmpQ */
+
+       if (!find)
+               return 0;
+       spin_lock_irqsave(&h->lock, flags);
+       list_for_each_entry(c, queue_head, list) {
+               if (c->scsi_cmd == NULL) /* e.g.: passthru ioctl */
+                       continue;
+               if (c->scsi_cmd == find) {
+                       spin_unlock_irqrestore(&h->lock, flags);
+                       return c;
+               }
+       }
+       spin_unlock_irqrestore(&h->lock, flags);
+       return NULL;
+}
+
+static struct CommandList *hpsa_find_cmd_in_queue_by_tag(struct ctlr_info *h,
+                                       u8 *tag, struct list_head *queue_head)
+{
+       unsigned long flags;
+       struct CommandList *c;
+
+       spin_lock_irqsave(&h->lock, flags);
+       list_for_each_entry(c, queue_head, list) {
+               if (memcmp(&c->Header.Tag, tag, 8) != 0)
+                       continue;
+               spin_unlock_irqrestore(&h->lock, flags);
+               return c;
+       }
+       spin_unlock_irqrestore(&h->lock, flags);
+       return NULL;
+}
+
+/* Some Smart Arrays need the abort tag swizzled, and some don't.  It's hard to
+ * tell which kind we're dealing with, so we send the abort both ways.  There
+ * shouldn't be any collisions between swizzled and unswizzled tags due to the
+ * way we construct our tags but we check anyway in case the assumptions which
+ * make this true someday become false.
+ */
+static int hpsa_send_abort_both_ways(struct ctlr_info *h,
+       unsigned char *scsi3addr, struct CommandList *abort)
+{
+       u8 swizzled_tag[8];
+       struct CommandList *c;
+       int rc = 0, rc2 = 0;
+
+       /* we do not expect to find the swizzled tag in our queue, but
+        * check anyway just to be sure the assumptions which make this
+        * the case haven't become wrong.
+        */
+       memcpy(swizzled_tag, &abort->Request.CDB[4], 8);
+       swizzle_abort_tag(swizzled_tag);
+       c = hpsa_find_cmd_in_queue_by_tag(h, swizzled_tag, &h->cmpQ);
+       if (c != NULL) {
+               dev_warn(&h->pdev->dev, "Unexpectedly found byte-swapped tag in completion queue.\n");
+               return hpsa_send_abort(h, scsi3addr, abort, 0);
+       }
+       rc = hpsa_send_abort(h, scsi3addr, abort, 0);
+
+       /* if the command is still in our queue, we can't conclude that it was
+        * aborted (it might have just completed normally) but in any case
+        * we don't need to try to abort it another way.
+        */
+       c = hpsa_find_cmd_in_queue(h, abort->scsi_cmd, &h->cmpQ);
+       if (c)
+               rc2 = hpsa_send_abort(h, scsi3addr, abort, 1);
+       return rc && rc2;
+}
+
+/* Send an abort for the specified command.
+ *     If the device and controller support it,
+ *             send a task abort request.
+ */
+static int hpsa_eh_abort_handler(struct scsi_cmnd *sc)
+{
+
+       int i, rc;
+       struct ctlr_info *h;
+       struct hpsa_scsi_dev_t *dev;
+       struct CommandList *abort; /* pointer to command to be aborted */
+       struct CommandList *found;
+       struct scsi_cmnd *as;   /* ptr to scsi cmd inside aborted command. */
+       char msg[256];          /* For debug messaging. */
+       int ml = 0;
+
+       /* Find the controller of the command to be aborted */
+       h = sdev_to_hba(sc->device);
+       if (WARN(h == NULL,
+                       "ABORT REQUEST FAILED, Controller lookup failed.\n"))
+               return FAILED;
+
+       /* Check that controller supports some kind of task abort */
+       if (!(HPSATMF_PHYS_TASK_ABORT & h->TMFSupportFlags) &&
+               !(HPSATMF_LOG_TASK_ABORT & h->TMFSupportFlags))
+               return FAILED;
+
+       memset(msg, 0, sizeof(msg));
+       ml += sprintf(msg+ml, "ABORT REQUEST on C%d:B%d:T%d:L%d ",
+               h->scsi_host->host_no, sc->device->channel,
+               sc->device->id, sc->device->lun);
+
+       /* Find the device of the command to be aborted */
+       dev = sc->device->hostdata;
+       if (!dev) {
+               dev_err(&h->pdev->dev, "%s FAILED, Device lookup failed.\n",
+                               msg);
+               return FAILED;
+       }
+
+       /* Get SCSI command to be aborted */
+       abort = (struct CommandList *) sc->host_scribble;
+       if (abort == NULL) {
+               dev_err(&h->pdev->dev, "%s FAILED, Command to abort is NULL.\n",
+                               msg);
+               return FAILED;
+       }
+
+       ml += sprintf(msg+ml, "Tag:0x%08x:%08x ",
+               abort->Header.Tag.upper, abort->Header.Tag.lower);
+       as  = (struct scsi_cmnd *) abort->scsi_cmd;
+       if (as != NULL)
+               ml += sprintf(msg+ml, "Command:0x%x SN:0x%lx ",
+                       as->cmnd[0], as->serial_number);
+       dev_dbg(&h->pdev->dev, "%s\n", msg);
+       dev_warn(&h->pdev->dev, "Abort request on C%d:B%d:T%d:L%d\n",
+               h->scsi_host->host_no, dev->bus, dev->target, dev->lun);
+
+       /* Search reqQ to see if command is queued but not submitted,
+        * if so, complete the command with aborted status and remove
+        * it from the reqQ.
+        */
+       found = hpsa_find_cmd_in_queue(h, sc, &h->reqQ);
+       if (found) {
+               found->err_info->CommandStatus = CMD_ABORTED;
+               finish_cmd(found);
+               dev_info(&h->pdev->dev, "%s Request SUCCEEDED (driver queue).\n",
+                               msg);
+               return SUCCESS;
+       }
+
+       /* not in reqQ, if also not in cmpQ, must have already completed */
+       found = hpsa_find_cmd_in_queue(h, sc, &h->cmpQ);
+       if (!found)  {
+               dev_dbg(&h->pdev->dev, "%s Request FAILED (not known to driver).\n",
+                               msg);
+               return SUCCESS;
+       }
+
+       /*
+        * Command is in flight, or may already have been completed by the
+        * firmware (but not returned to the scsi mid layer); we can't
+        * distinguish which.  Send the abort down.
+        */
+       rc = hpsa_send_abort_both_ways(h, dev->scsi3addr, abort);
+       if (rc != 0) {
+               dev_dbg(&h->pdev->dev, "%s Request FAILED.\n", msg);
+               dev_warn(&h->pdev->dev, "FAILED abort on device C%d:B%d:T%d:L%d\n",
+                       h->scsi_host->host_no,
+                       dev->bus, dev->target, dev->lun);
+               return FAILED;
+       }
+       dev_info(&h->pdev->dev, "%s REQUEST SUCCEEDED.\n", msg);
+
+       /* If the abort(s) above completed and actually aborted the
+        * command, then the command to be aborted should already be
+        * completed.  If not, wait around a bit more to see if they
+        * manage to complete normally.
+        */
+#define ABORT_COMPLETE_WAIT_SECS 30
+       for (i = 0; i < ABORT_COMPLETE_WAIT_SECS * 10; i++) {
+               found = hpsa_find_cmd_in_queue(h, sc, &h->cmpQ);
+               if (!found)
+                       return SUCCESS;
+               msleep(100);
+       }
+       dev_warn(&h->pdev->dev, "%s FAILED. Aborted command has not completed after %d seconds.\n",
+               msg, ABORT_COMPLETE_WAIT_SECS);
+       return FAILED;
+}
+
+
 /*
  * For operations that cannot sleep, a command block is allocated at init,
  * and managed by cmd_alloc() and cmd_free() using a simple bitmap to track
@@ -2346,14 +2659,21 @@ static struct CommandList *cmd_alloc(struct ctlr_info *h)
        int i;
        union u64bit temp64;
        dma_addr_t cmd_dma_handle, err_dma_handle;
+       unsigned long flags;
 
+       spin_lock_irqsave(&h->lock, flags);
        do {
                i = find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds);
-               if (i == h->nr_cmds)
+               if (i == h->nr_cmds) {
+                       spin_unlock_irqrestore(&h->lock, flags);
                        return NULL;
+               }
        } while (test_and_set_bit
                 (i & (BITS_PER_LONG - 1),
                  h->cmd_pool_bits + (i / BITS_PER_LONG)) != 0);
+       h->nr_allocs++;
+       spin_unlock_irqrestore(&h->lock, flags);
+
        c = h->cmd_pool + i;
        memset(c, 0, sizeof(*c));
        cmd_dma_handle = h->cmd_pool_dhandle
@@ -2362,7 +2682,6 @@ static struct CommandList *cmd_alloc(struct ctlr_info *h)
        memset(c->err_info, 0, sizeof(*c->err_info));
        err_dma_handle = h->errinfo_pool_dhandle
            + i * sizeof(*c->err_info);
-       h->nr_allocs++;
 
        c->cmdindex = i;
 
@@ -2418,11 +2737,14 @@ static struct CommandList *cmd_special_alloc(struct ctlr_info *h)
 static void cmd_free(struct ctlr_info *h, struct CommandList *c)
 {
        int i;
+       unsigned long flags;
 
        i = c - h->cmd_pool;
+       spin_lock_irqsave(&h->lock, flags);
        clear_bit(i & (BITS_PER_LONG - 1),
                  h->cmd_pool_bits + (i / BITS_PER_LONG));
        h->nr_frees++;
+       spin_unlock_irqrestore(&h->lock, flags);
 }
 
 static void cmd_special_free(struct ctlr_info *h, struct CommandList *c)
@@ -2866,6 +3188,7 @@ static void fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
        int cmd_type)
 {
        int pci_dir = XFER_NONE;
+       struct CommandList *a; /* for commands to be aborted */
 
        c->cmd_type = CMD_IOCTL_PEND;
        c->Header.ReplyQueue = 0;
@@ -2949,8 +3272,35 @@ static void fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
                        c->Request.CDB[5] = 0x00;
                        c->Request.CDB[6] = 0x00;
                        c->Request.CDB[7] = 0x00;
+                       break;
+               case  HPSA_ABORT_MSG:
+                       a = buff;       /* point to command to be aborted */
+                       dev_dbg(&h->pdev->dev, "Abort Tag:0x%08x:%08x using request Tag:0x%08x:%08x\n",
+                               a->Header.Tag.upper, a->Header.Tag.lower,
+                               c->Header.Tag.upper, c->Header.Tag.lower);
+                       c->Request.CDBLen = 16;
+                       c->Request.Type.Type = TYPE_MSG;
+                       c->Request.Type.Attribute = ATTR_SIMPLE;
+                       c->Request.Type.Direction = XFER_WRITE;
+                       c->Request.Timeout = 0; /* Don't time out */
+                       c->Request.CDB[0] = HPSA_TASK_MANAGEMENT;
+                       c->Request.CDB[1] = HPSA_TMF_ABORT_TASK;
+                       c->Request.CDB[2] = 0x00; /* reserved */
+                       c->Request.CDB[3] = 0x00; /* reserved */
+                       /* Tag to abort goes in CDB[4]-CDB[11] */
+                       c->Request.CDB[4] = a->Header.Tag.lower & 0xFF;
+                       c->Request.CDB[5] = (a->Header.Tag.lower >> 8) & 0xFF;
+                       c->Request.CDB[6] = (a->Header.Tag.lower >> 16) & 0xFF;
+                       c->Request.CDB[7] = (a->Header.Tag.lower >> 24) & 0xFF;
+                       c->Request.CDB[8] = a->Header.Tag.upper & 0xFF;
+                       c->Request.CDB[9] = (a->Header.Tag.upper >> 8) & 0xFF;
+                       c->Request.CDB[10] = (a->Header.Tag.upper >> 16) & 0xFF;
+                       c->Request.CDB[11] = (a->Header.Tag.upper >> 24) & 0xFF;
+                       c->Request.CDB[12] = 0x00; /* reserved */
+                       c->Request.CDB[13] = 0x00; /* reserved */
+                       c->Request.CDB[14] = 0x00; /* reserved */
+                       c->Request.CDB[15] = 0x00; /* reserved */
                break;
-
                default:
                        dev_warn(&h->pdev->dev, "unknown message type %d\n",
                                cmd);
@@ -2998,7 +3348,9 @@ static void __iomem *remap_pci_mem(ulong base, ulong size)
 static void start_io(struct ctlr_info *h)
 {
        struct CommandList *c;
+       unsigned long flags;
 
+       spin_lock_irqsave(&h->lock, flags);
        while (!list_empty(&h->reqQ)) {
                c = list_entry(h->reqQ.next, struct CommandList, list);
                /* can't do anything if fifo is full */
@@ -3011,17 +3363,28 @@ static void start_io(struct ctlr_info *h)
                removeQ(c);
                h->Qdepth--;
 
-               /* Tell the controller execute command */
-               h->access.submit_command(h, c);
-
                /* Put job onto the completed Q */
                addQ(&h->cmpQ, c);
+
+               /* Must increment commands_outstanding before unlocking
+                * and submitting, to avoid a race when checking for the
+                * fifo-full condition.
+                */
+               h->commands_outstanding++;
+               if (h->commands_outstanding > h->max_outstanding)
+                       h->max_outstanding = h->commands_outstanding;
+
+               /* Tell the controller execute command */
+               spin_unlock_irqrestore(&h->lock, flags);
+               h->access.submit_command(h, c);
+               spin_lock_irqsave(&h->lock, flags);
        }
+       spin_unlock_irqrestore(&h->lock, flags);
 }
 
-static inline unsigned long get_next_completion(struct ctlr_info *h)
+static inline unsigned long get_next_completion(struct ctlr_info *h, u8 q)
 {
-       return h->access.command_completed(h);
+       return h->access.command_completed(h, q);
 }
 
 static inline bool interrupt_pending(struct ctlr_info *h)
@@ -3045,9 +3408,14 @@ static inline int bad_tag(struct ctlr_info *h, u32 tag_index,
        return 0;
 }
 
-static inline void finish_cmd(struct CommandList *c, u32 raw_tag)
+static inline void finish_cmd(struct CommandList *c)
 {
+       unsigned long flags;
+
+       spin_lock_irqsave(&c->h->lock, flags);
        removeQ(c);
+       spin_unlock_irqrestore(&c->h->lock, flags);
+       dial_up_lockup_detection_on_fw_flash_complete(c->h, c);
        if (likely(c->cmd_type == CMD_SCSI))
                complete_scsi_command(c);
        else if (c->cmd_type == CMD_IOCTL_PEND)
@@ -3075,36 +3443,38 @@ static inline u32 hpsa_tag_discard_error_bits(struct ctlr_info *h, u32 tag)
 }
 
 /* process completion of an indexed ("direct lookup") command */
-static inline u32 process_indexed_cmd(struct ctlr_info *h,
+static inline void process_indexed_cmd(struct ctlr_info *h,
        u32 raw_tag)
 {
        u32 tag_index;
        struct CommandList *c;
 
        tag_index = hpsa_tag_to_index(raw_tag);
-       if (bad_tag(h, tag_index, raw_tag))
-               return next_command(h);
-       c = h->cmd_pool + tag_index;
-       finish_cmd(c, raw_tag);
-       return next_command(h);
+       if (!bad_tag(h, tag_index, raw_tag)) {
+               c = h->cmd_pool + tag_index;
+               finish_cmd(c);
+       }
 }
 
 /* process completion of a non-indexed command */
-static inline u32 process_nonindexed_cmd(struct ctlr_info *h,
+static inline void process_nonindexed_cmd(struct ctlr_info *h,
        u32 raw_tag)
 {
        u32 tag;
        struct CommandList *c = NULL;
+       unsigned long flags;
 
        tag = hpsa_tag_discard_error_bits(h, raw_tag);
+       spin_lock_irqsave(&h->lock, flags);
        list_for_each_entry(c, &h->cmpQ, list) {
                if ((c->busaddr & 0xFFFFFFE0) == (tag & 0xFFFFFFE0)) {
-                       finish_cmd(c, raw_tag);
-                       return next_command(h);
+                       spin_unlock_irqrestore(&h->lock, flags);
+                       finish_cmd(c);
+                       return;
                }
        }
+       spin_unlock_irqrestore(&h->lock, flags);
        bad_tag(h, h->nr_cmds + 1, raw_tag);
-       return next_command(h);
 }
 
 /* Some controllers, like p400, will give us one interrupt
@@ -3126,10 +3496,20 @@ static int ignore_bogus_interrupt(struct ctlr_info *h)
        return 1;
 }
 
-static irqreturn_t hpsa_intx_discard_completions(int irq, void *dev_id)
+/*
+ * Convert &h->q[x] (passed to interrupt handlers) back to h.
+ * Relies on (h-q[x] == x) being true for x such that
+ * 0 <= x < MAX_REPLY_QUEUES.
+ */
+static struct ctlr_info *queue_to_hba(u8 *queue)
 {
-       struct ctlr_info *h = dev_id;
-       unsigned long flags;
+       return container_of((queue - *queue), struct ctlr_info, q[0]);
+}
+
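
The queue_to_hba() helper above works because hpsa_request_irq() initializes h->q[i] = i: the byte an interrupt handler receives names the reply queue, and stepping back that many bytes from its own address lands on q[0], from which container_of() recovers the controller. A reduced user-space model (hypothetical struct, not the driver's ctlr_info):

    #include <stddef.h>
    #include <stdio.h>

    #define NQUEUES 4

    struct ctlr {
        int id;
        unsigned char q[NQUEUES];       /* filled so that q[i] == i */
    };

    /* User-space stand-in for the kernel's container_of(). */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    static struct ctlr *queue_to_ctlr(unsigned char *queue)
    {
        /* queue points at q[*queue]; back up *queue bytes to reach q[0] */
        return container_of(queue - *queue, struct ctlr, q);
    }

    int main(void)
    {
        struct ctlr c = { .id = 7 };
        unsigned char *irq_cookie;
        int i;

        for (i = 0; i < NQUEUES; i++)
            c.q[i] = (unsigned char)i;

        irq_cookie = &c.q[2];           /* what request_irq() would pass along */
        printf("ctlr id=%d, queue=%d\n", queue_to_ctlr(irq_cookie)->id, *irq_cookie);
        return 0;
    }
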
+static irqreturn_t hpsa_intx_discard_completions(int irq, void *queue)
+{
+       struct ctlr_info *h = queue_to_hba(queue);
+       u8 q = *(u8 *) queue;
        u32 raw_tag;
 
        if (ignore_bogus_interrupt(h))
@@ -3137,74 +3517,68 @@ static irqreturn_t hpsa_intx_discard_completions(int irq, void *dev_id)
 
        if (interrupt_not_for_us(h))
                return IRQ_NONE;
-       spin_lock_irqsave(&h->lock, flags);
        h->last_intr_timestamp = get_jiffies_64();
        while (interrupt_pending(h)) {
-               raw_tag = get_next_completion(h);
+               raw_tag = get_next_completion(h, q);
                while (raw_tag != FIFO_EMPTY)
-                       raw_tag = next_command(h);
+                       raw_tag = next_command(h, q);
        }
-       spin_unlock_irqrestore(&h->lock, flags);
        return IRQ_HANDLED;
 }
 
-static irqreturn_t hpsa_msix_discard_completions(int irq, void *dev_id)
+static irqreturn_t hpsa_msix_discard_completions(int irq, void *queue)
 {
-       struct ctlr_info *h = dev_id;
-       unsigned long flags;
+       struct ctlr_info *h = queue_to_hba(queue);
        u32 raw_tag;
+       u8 q = *(u8 *) queue;
 
        if (ignore_bogus_interrupt(h))
                return IRQ_NONE;
 
-       spin_lock_irqsave(&h->lock, flags);
        h->last_intr_timestamp = get_jiffies_64();
-       raw_tag = get_next_completion(h);
+       raw_tag = get_next_completion(h, q);
        while (raw_tag != FIFO_EMPTY)
-               raw_tag = next_command(h);
-       spin_unlock_irqrestore(&h->lock, flags);
+               raw_tag = next_command(h, q);
        return IRQ_HANDLED;
 }
 
-static irqreturn_t do_hpsa_intr_intx(int irq, void *dev_id)
+static irqreturn_t do_hpsa_intr_intx(int irq, void *queue)
 {
-       struct ctlr_info *h = dev_id;
-       unsigned long flags;
+       struct ctlr_info *h = queue_to_hba((u8 *) queue);
        u32 raw_tag;
+       u8 q = *(u8 *) queue;
 
        if (interrupt_not_for_us(h))
                return IRQ_NONE;
-       spin_lock_irqsave(&h->lock, flags);
        h->last_intr_timestamp = get_jiffies_64();
        while (interrupt_pending(h)) {
-               raw_tag = get_next_completion(h);
+               raw_tag = get_next_completion(h, q);
                while (raw_tag != FIFO_EMPTY) {
-                       if (hpsa_tag_contains_index(raw_tag))
-                               raw_tag = process_indexed_cmd(h, raw_tag);
+                       if (likely(hpsa_tag_contains_index(raw_tag)))
+                               process_indexed_cmd(h, raw_tag);
                        else
-                               raw_tag = process_nonindexed_cmd(h, raw_tag);
+                               process_nonindexed_cmd(h, raw_tag);
+                       raw_tag = next_command(h, q);
                }
        }
-       spin_unlock_irqrestore(&h->lock, flags);
        return IRQ_HANDLED;
 }
 
-static irqreturn_t do_hpsa_intr_msi(int irq, void *dev_id)
+static irqreturn_t do_hpsa_intr_msi(int irq, void *queue)
 {
-       struct ctlr_info *h = dev_id;
-       unsigned long flags;
+       struct ctlr_info *h = queue_to_hba(queue);
        u32 raw_tag;
+       u8 q = *(u8 *) queue;
 
-       spin_lock_irqsave(&h->lock, flags);
        h->last_intr_timestamp = get_jiffies_64();
-       raw_tag = get_next_completion(h);
+       raw_tag = get_next_completion(h, q);
        while (raw_tag != FIFO_EMPTY) {
-               if (hpsa_tag_contains_index(raw_tag))
-                       raw_tag = process_indexed_cmd(h, raw_tag);
+               if (likely(hpsa_tag_contains_index(raw_tag)))
+                       process_indexed_cmd(h, raw_tag);
                else
-                       raw_tag = process_nonindexed_cmd(h, raw_tag);
+                       process_nonindexed_cmd(h, raw_tag);
+               raw_tag = next_command(h, q);
        }
-       spin_unlock_irqrestore(&h->lock, flags);
        return IRQ_HANDLED;
 }
 
@@ -3638,10 +4012,13 @@ static int find_PCI_BAR_index(struct pci_dev *pdev, unsigned long pci_bar_addr)
 static void __devinit hpsa_interrupt_mode(struct ctlr_info *h)
 {
 #ifdef CONFIG_PCI_MSI
-       int err;
-       struct msix_entry hpsa_msix_entries[4] = { {0, 0}, {0, 1},
-       {0, 2}, {0, 3}
-       };
+       int err, i;
+       struct msix_entry hpsa_msix_entries[MAX_REPLY_QUEUES];
+
+       for (i = 0; i < MAX_REPLY_QUEUES; i++) {
+               hpsa_msix_entries[i].vector = 0;
+               hpsa_msix_entries[i].entry = i;
+       }
 
        /* Some boards advertise MSI but don't really support it */
        if ((h->board_id == 0x40700E11) || (h->board_id == 0x40800E11) ||
@@ -3649,12 +4026,11 @@ static void __devinit hpsa_interrupt_mode(struct ctlr_info *h)
                goto default_int_mode;
        if (pci_find_capability(h->pdev, PCI_CAP_ID_MSIX)) {
                dev_info(&h->pdev->dev, "MSIX\n");
-               err = pci_enable_msix(h->pdev, hpsa_msix_entries, 4);
+               err = pci_enable_msix(h->pdev, hpsa_msix_entries,
+                                               MAX_REPLY_QUEUES);
                if (!err) {
-                       h->intr[0] = hpsa_msix_entries[0].vector;
-                       h->intr[1] = hpsa_msix_entries[1].vector;
-                       h->intr[2] = hpsa_msix_entries[2].vector;
-                       h->intr[3] = hpsa_msix_entries[3].vector;
+                       for (i = 0; i < MAX_REPLY_QUEUES; i++)
+                               h->intr[i] = hpsa_msix_entries[i].vector;
                        h->msix_vector = 1;
                        return;
                }
@@ -3705,14 +4081,6 @@ static int __devinit hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id)
        return ARRAY_SIZE(products) - 1; /* generic unknown smart array */
 }
 
-static inline bool hpsa_board_disabled(struct pci_dev *pdev)
-{
-       u16 command;
-
-       (void) pci_read_config_word(pdev, PCI_COMMAND, &command);
-       return ((command & PCI_COMMAND_MEMORY) == 0);
-}
-
 static int __devinit hpsa_pci_find_memory_BAR(struct pci_dev *pdev,
        unsigned long *memory_bar)
 {
@@ -3838,14 +4206,14 @@ static void __devinit hpsa_find_board_params(struct ctlr_info *h)
                h->maxsgentries = 31; /* default to traditional values */
                h->chainsize = 0;
        }
+
+       /* Find out what task management functions are supported and cache */
+       h->TMFSupportFlags = readl(&(h->cfgtable->TMFSupportFlags));
 }
 
 static inline bool hpsa_CISS_signature_present(struct ctlr_info *h)
 {
-       if ((readb(&h->cfgtable->Signature[0]) != 'C') ||
-           (readb(&h->cfgtable->Signature[1]) != 'I') ||
-           (readb(&h->cfgtable->Signature[2]) != 'S') ||
-           (readb(&h->cfgtable->Signature[3]) != 'S')) {
+       if (!check_signature(h->cfgtable->Signature, "CISS", 4)) {
                dev_warn(&h->pdev->dev, "not a valid CISS config table\n");
                return false;
        }
@@ -3932,11 +4300,6 @@ static int __devinit hpsa_pci_init(struct ctlr_info *h)
        h->product_name = products[prod_index].product_name;
        h->access = *(products[prod_index].access);
 
-       if (hpsa_board_disabled(h->pdev)) {
-               dev_warn(&h->pdev->dev, "controller appears to be disabled\n");
-               return -ENODEV;
-       }
-
        pci_disable_link_state(h->pdev, PCIE_LINK_STATE_L0S |
                               PCIE_LINK_STATE_L1 | PCIE_LINK_STATE_CLKPM);
 
@@ -3946,6 +4309,9 @@ static int __devinit hpsa_pci_init(struct ctlr_info *h)
                return err;
        }
 
+       /* Enable bus mastering (pci_disable_device may disable this) */
+       pci_set_master(h->pdev);
+
        err = pci_request_regions(h->pdev, HPSA);
        if (err) {
                dev_err(&h->pdev->dev,
@@ -3987,10 +4353,7 @@ err_out_free_res:
                iounmap(h->cfgtable);
        if (h->vaddr)
                iounmap(h->vaddr);
-       /*
-        * Deliberately omit pci_disable_device(): it does something nasty to
-        * Smart Array controllers that pci_enable_device does not undo
-        */
+       pci_disable_device(h->pdev);
        pci_release_regions(h->pdev);
        return err;
 }
@@ -4081,14 +4444,33 @@ static int hpsa_request_irq(struct ctlr_info *h,
        irqreturn_t (*msixhandler)(int, void *),
        irqreturn_t (*intxhandler)(int, void *))
 {
-       int rc;
+       int rc, i;
 
-       if (h->msix_vector || h->msi_vector)
-               rc = request_irq(h->intr[h->intr_mode], msixhandler,
-                               0, h->devname, h);
-       else
-               rc = request_irq(h->intr[h->intr_mode], intxhandler,
-                               IRQF_SHARED, h->devname, h);
+       /*
+        * initialize h->q[x] = x so that interrupt handlers know which
+        * queue to process.
+        */
+       for (i = 0; i < MAX_REPLY_QUEUES; i++)
+               h->q[i] = (u8) i;
+
+       if (h->intr_mode == PERF_MODE_INT && h->msix_vector) {
+               /* If performant mode and MSI-X, use multiple reply queues */
+               for (i = 0; i < MAX_REPLY_QUEUES; i++)
+                       rc = request_irq(h->intr[i], msixhandler,
+                                       0, h->devname,
+                                       &h->q[i]);
+       } else {
+               /* Use single reply pool */
+               if (h->msix_vector || h->msi_vector) {
+                       rc = request_irq(h->intr[h->intr_mode],
+                               msixhandler, 0, h->devname,
+                               &h->q[h->intr_mode]);
+               } else {
+                       rc = request_irq(h->intr[h->intr_mode],
+                               intxhandler, IRQF_SHARED, h->devname,
+                               &h->q[h->intr_mode]);
+               }
+       }
        if (rc) {
                dev_err(&h->pdev->dev, "unable to get irq %d for %s\n",
                       h->intr[h->intr_mode], h->devname);
@@ -4121,15 +4503,38 @@ static int __devinit hpsa_kdump_soft_reset(struct ctlr_info *h)
        return 0;
 }
 
-static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
+static void free_irqs(struct ctlr_info *h)
 {
-       free_irq(h->intr[h->intr_mode], h);
+       int i;
+
+       if (!h->msix_vector || h->intr_mode != PERF_MODE_INT) {
+               /* Single reply queue, only one irq to free */
+               i = h->intr_mode;
+               free_irq(h->intr[i], &h->q[i]);
+               return;
+       }
+
+       for (i = 0; i < MAX_REPLY_QUEUES; i++)
+               free_irq(h->intr[i], &h->q[i]);
+}
+
+static void hpsa_free_irqs_and_disable_msix(struct ctlr_info *h)
+{
+       free_irqs(h);
 #ifdef CONFIG_PCI_MSI
-       if (h->msix_vector)
-               pci_disable_msix(h->pdev);
-       else if (h->msi_vector)
-               pci_disable_msi(h->pdev);
+       if (h->msix_vector) {
+               if (h->pdev->msix_enabled)
+                       pci_disable_msix(h->pdev);
+       } else if (h->msi_vector) {
+               if (h->pdev->msi_enabled)
+                       pci_disable_msi(h->pdev);
+       }
 #endif /* CONFIG_PCI_MSI */
+}
+
+static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
+{
+       hpsa_free_irqs_and_disable_msix(h);
        hpsa_free_sg_chain_blocks(h);
        hpsa_free_cmd_pool(h);
        kfree(h->blockFetchTable);
@@ -4165,7 +4570,7 @@ static void fail_all_cmds_on_list(struct ctlr_info *h, struct list_head *list)
        while (!list_empty(list)) {
                c = list_entry(list->next, struct CommandList, list);
                c->err_info->CommandStatus = CMD_HARDWARE_ERR;
-               finish_cmd(c, c->Header.Tag.lower);
+               finish_cmd(c);
        }
 }
 
@@ -4188,9 +4593,6 @@ static void controller_lockup_detected(struct ctlr_info *h)
        spin_unlock_irqrestore(&h->lock, flags);
 }
 
-#define HEARTBEAT_SAMPLE_INTERVAL (10 * HZ)
-#define HEARTBEAT_CHECK_MINIMUM_INTERVAL (HEARTBEAT_SAMPLE_INTERVAL / 2)
-
 static void detect_controller_lockup(struct ctlr_info *h)
 {
        u64 now;
@@ -4201,7 +4603,7 @@ static void detect_controller_lockup(struct ctlr_info *h)
        now = get_jiffies_64();
        /* If we've received an interrupt recently, we're ok. */
        if (time_after64(h->last_intr_timestamp +
-                               (HEARTBEAT_CHECK_MINIMUM_INTERVAL), now))
+                               (h->heartbeat_sample_interval), now))
                return;
 
        /*
@@ -4210,7 +4612,7 @@ static void detect_controller_lockup(struct ctlr_info *h)
         * otherwise don't care about signals in this thread.
         */
        if (time_after64(h->last_heartbeat_timestamp +
-                               (HEARTBEAT_CHECK_MINIMUM_INTERVAL), now))
+                               (h->heartbeat_sample_interval), now))
                return;
 
        /* If heartbeat has not changed since we last looked, we're not ok. */
@@ -4252,6 +4654,7 @@ static void add_ctlr_to_lockup_detector_list(struct ctlr_info *h)
 {
        unsigned long flags;
 
+       h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
        spin_lock_irqsave(&lockup_detector_lock, flags);
        list_add_tail(&h->lockup_list, &hpsa_ctlr_list);
        spin_unlock_irqrestore(&lockup_detector_lock, flags);
@@ -4391,7 +4794,7 @@ reinit_after_soft_reset:
                spin_lock_irqsave(&h->lock, flags);
                h->access.set_intr_mask(h, HPSA_INTR_OFF);
                spin_unlock_irqrestore(&h->lock, flags);
-               free_irq(h->intr[h->intr_mode], h);
+               free_irqs(h);
                rc = hpsa_request_irq(h, hpsa_msix_discard_completions,
                                        hpsa_intx_discard_completions);
                if (rc) {
@@ -4441,7 +4844,7 @@ reinit_after_soft_reset:
 clean4:
        hpsa_free_sg_chain_blocks(h);
        hpsa_free_cmd_pool(h);
-       free_irq(h->intr[h->intr_mode], h);
+       free_irqs(h);
 clean2:
 clean1:
        kfree(h);
@@ -4484,13 +4887,7 @@ static void hpsa_shutdown(struct pci_dev *pdev)
         */
        hpsa_flush_cache(h);
        h->access.set_intr_mask(h, HPSA_INTR_OFF);
-       free_irq(h->intr[h->intr_mode], h);
-#ifdef CONFIG_PCI_MSI
-       if (h->msix_vector)
-               pci_disable_msix(h->pdev);
-       else if (h->msi_vector)
-               pci_disable_msi(h->pdev);
-#endif                         /* CONFIG_PCI_MSI */
+       hpsa_free_irqs_and_disable_msix(h);
 }
 
 static void __devexit hpsa_free_device_info(struct ctlr_info *h)
@@ -4529,10 +4926,7 @@ static void __devexit hpsa_remove_one(struct pci_dev *pdev)
        kfree(h->cmd_pool_bits);
        kfree(h->blockFetchTable);
        kfree(h->hba_inquiry_data);
-       /*
-        * Deliberately omit pci_disable_device(): it does something nasty to
-        * Smart Array controllers that pci_enable_device does not undo
-        */
+       pci_disable_device(pdev);
        pci_release_regions(pdev);
        pci_set_drvdata(pdev, NULL);
        kfree(h);
@@ -4627,11 +5021,8 @@ static __devinit void hpsa_enter_performant_mode(struct ctlr_info *h,
         * 10 = 6 s/g entry or 24k
         */
 
-       h->reply_pool_wraparound = 1; /* spec: init to 1 */
-
        /* Controller spec: zero out this buffer. */
        memset(h->reply_pool, 0, h->reply_pool_size);
-       h->reply_pool_head = h->reply_pool;
 
        bft[7] = SG_ENTRIES_IN_CMD + 4;
        calc_bucket_map(bft, ARRAY_SIZE(bft),
@@ -4641,12 +5032,19 @@ static __devinit void hpsa_enter_performant_mode(struct ctlr_info *h,
 
        /* size of controller ring buffer */
        writel(h->max_commands, &h->transtable->RepQSize);
-       writel(1, &h->transtable->RepQCount);
+       writel(h->nreply_queues, &h->transtable->RepQCount);
        writel(0, &h->transtable->RepQCtrAddrLow32);
        writel(0, &h->transtable->RepQCtrAddrHigh32);
-       writel(h->reply_pool_dhandle, &h->transtable->RepQAddr0Low32);
-       writel(0, &h->transtable->RepQAddr0High32);
-       writel(CFGTBL_Trans_Performant | use_short_tags,
+
+       for (i = 0; i < h->nreply_queues; i++) {
+               writel(0, &h->transtable->RepQAddr[i].upper);
+               writel(h->reply_pool_dhandle +
+                       (h->max_commands * sizeof(u64) * i),
+                       &h->transtable->RepQAddr[i].lower);
+       }
+
+       writel(CFGTBL_Trans_Performant | use_short_tags |
+               CFGTBL_Trans_enable_directed_msix,
                &(h->cfgtable->HostWrite.TransportRequest));
        writel(CFGTBL_ChangeReq, h->vaddr + SA5_DOORBELL);
        hpsa_wait_for_mode_change_ack(h);
@@ -4664,6 +5062,7 @@ static __devinit void hpsa_enter_performant_mode(struct ctlr_info *h,
 static __devinit void hpsa_put_ctlr_into_performant_mode(struct ctlr_info *h)
 {
        u32 trans_support;
+       int i;
 
        if (hpsa_simple_mode)
                return;
@@ -4672,12 +5071,20 @@ static __devinit void hpsa_put_ctlr_into_performant_mode(struct ctlr_info *h)
        if (!(trans_support & PERFORMANT_MODE))
                return;
 
+       h->nreply_queues = h->msix_vector ? MAX_REPLY_QUEUES : 1;
        hpsa_get_max_perf_mode_cmds(h);
        /* Performant mode ring buffer and supporting data structures */
-       h->reply_pool_size = h->max_commands * sizeof(u64);
+       h->reply_pool_size = h->max_commands * sizeof(u64) * h->nreply_queues;
        h->reply_pool = pci_alloc_consistent(h->pdev, h->reply_pool_size,
                                &(h->reply_pool_dhandle));
 
+       for (i = 0; i < h->nreply_queues; i++) {
+               h->reply_queue[i].head = &h->reply_pool[h->max_commands * i];
+               h->reply_queue[i].size = h->max_commands;
+               h->reply_queue[i].wraparound = 1;  /* spec: init to 1 */
+               h->reply_queue[i].current_entry = 0;
+       }
+
        /* Need a block fetch table for performant mode */
        h->blockFetchTable = kmalloc(((SG_ENTRIES_IN_CMD + 1) *
                                sizeof(u32)), GFP_KERNEL);
index 7b28d54..9816479 100644 (file)
@@ -34,7 +34,7 @@ struct access_method {
        void (*set_intr_mask)(struct ctlr_info *h, unsigned long val);
        unsigned long (*fifo_full)(struct ctlr_info *h);
        bool (*intr_pending)(struct ctlr_info *h);
-       unsigned long (*command_completed)(struct ctlr_info *h);
+       unsigned long (*command_completed)(struct ctlr_info *h, u8 q);
 };
 
 struct hpsa_scsi_dev_t {
@@ -48,6 +48,13 @@ struct hpsa_scsi_dev_t {
        unsigned char raid_level;       /* from inquiry page 0xC1 */
 };
 
+struct reply_pool {
+       u64 *head;
+       size_t size;
+       u8 wraparound;
+       u32 current_entry;
+};
+
 struct ctlr_info {
        int     ctlr;
        char    devname[8];
@@ -68,7 +75,7 @@ struct ctlr_info {
 #      define DOORBELL_INT     1
 #      define SIMPLE_MODE_INT  2
 #      define MEMQ_MODE_INT    3
-       unsigned int intr[4];
+       unsigned int intr[MAX_REPLY_QUEUES];
        unsigned int msix_vector;
        unsigned int msi_vector;
        int intr_mode; /* either PERF_MODE_INT or SIMPLE_MODE_INT */
@@ -78,7 +85,6 @@ struct ctlr_info {
        struct list_head reqQ;
        struct list_head cmpQ;
        unsigned int Qdepth;
-       unsigned int maxQsinceinit;
        unsigned int maxSG;
        spinlock_t lock;
        int maxsgentries;
@@ -111,20 +117,45 @@ struct ctlr_info {
        unsigned long transMethod;
 
        /*
-        * Performant mode completion buffer
+        * Performant mode completion buffers
         */
        u64 *reply_pool;
-       dma_addr_t reply_pool_dhandle;
-       u64 *reply_pool_head;
        size_t reply_pool_size;
-       unsigned char reply_pool_wraparound;
+       struct reply_pool reply_queue[MAX_REPLY_QUEUES];
+       u8 nreply_queues;
+       dma_addr_t reply_pool_dhandle;
        u32 *blockFetchTable;
        unsigned char *hba_inquiry_data;
        u64 last_intr_timestamp;
        u32 last_heartbeat;
        u64 last_heartbeat_timestamp;
+       u32 heartbeat_sample_interval;
+       atomic_t firmware_flash_in_progress;
        u32 lockup_detected;
        struct list_head lockup_list;
+       /* Address of h->q[x] is passed to intr handler to know which queue */
+       u8 q[MAX_REPLY_QUEUES];
+       u32 TMFSupportFlags; /* cache what task mgmt funcs are supported. */
+#define HPSATMF_BITS_SUPPORTED  (1 << 0)
+#define HPSATMF_PHYS_LUN_RESET  (1 << 1)
+#define HPSATMF_PHYS_NEX_RESET  (1 << 2)
+#define HPSATMF_PHYS_TASK_ABORT (1 << 3)
+#define HPSATMF_PHYS_TSET_ABORT (1 << 4)
+#define HPSATMF_PHYS_CLEAR_ACA  (1 << 5)
+#define HPSATMF_PHYS_CLEAR_TSET (1 << 6)
+#define HPSATMF_PHYS_QRY_TASK   (1 << 7)
+#define HPSATMF_PHYS_QRY_TSET   (1 << 8)
+#define HPSATMF_PHYS_QRY_ASYNC  (1 << 9)
+#define HPSATMF_MASK_SUPPORTED  (1 << 16)
+#define HPSATMF_LOG_LUN_RESET   (1 << 17)
+#define HPSATMF_LOG_NEX_RESET   (1 << 18)
+#define HPSATMF_LOG_TASK_ABORT  (1 << 19)
+#define HPSATMF_LOG_TSET_ABORT  (1 << 20)
+#define HPSATMF_LOG_CLEAR_ACA   (1 << 21)
+#define HPSATMF_LOG_CLEAR_TSET  (1 << 22)
+#define HPSATMF_LOG_QRY_TASK    (1 << 23)
+#define HPSATMF_LOG_QRY_TSET    (1 << 24)
+#define HPSATMF_LOG_QRY_ASYNC   (1 << 25)
 };
 #define HPSA_ABORT_MSG 0
 #define HPSA_DEVICE_RESET_MSG 1
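The q[MAX_REPLY_QUEUES] bytes above exist so that a per-queue cookie can be passed to request_irq(); the handler then recovers both the queue index and the ctlr_info from that pointer. A hedged sketch of the wiring (the handler name hpsa_msix_intr() and the registration loop are illustrative, not part of this hunk; real completion processing is omitted):

	static irqreturn_t hpsa_msix_intr(int irq, void *dev_id)
	{
		u8 *queue = dev_id;
		u8 q = *queue;			/* which reply queue fired */
		struct ctlr_info *h = container_of(queue - q,
						   struct ctlr_info, q[0]);

		/* drain reply queue q; a real handler would hand each
		 * returned tag to the command completion code.
		 */
		while (h->access.command_completed(h, q) != FIFO_EMPTY)
			;
		return IRQ_HANDLED;
	}

	/* setup, assuming one MSI-X vector per reply queue: */
	for (i = 0; i < h->nreply_queues; i++) {
		h->q[i] = (u8) i;
		request_irq(h->intr[i], hpsa_msix_intr, 0, h->devname,
			    &h->q[i]);
	}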
@@ -216,9 +247,6 @@ static void SA5_submit_command(struct ctlr_info *h,
                c->Header.Tag.lower);
        writel(c->busaddr, h->vaddr + SA5_REQUEST_PORT_OFFSET);
        (void) readl(h->vaddr + SA5_SCRATCHPAD_OFFSET);
-       h->commands_outstanding++;
-       if (h->commands_outstanding > h->max_outstanding)
-               h->max_outstanding = h->commands_outstanding;
 }
 
 /*
@@ -254,16 +282,17 @@ static void SA5_performant_intr_mask(struct ctlr_info *h, unsigned long val)
        }
 }
 
-static unsigned long SA5_performant_completed(struct ctlr_info *h)
+static unsigned long SA5_performant_completed(struct ctlr_info *h, u8 q)
 {
-       unsigned long register_value = FIFO_EMPTY;
+       struct reply_pool *rq = &h->reply_queue[q];
+       unsigned long flags, register_value = FIFO_EMPTY;
 
-       /* flush the controller write of the reply queue by reading
-        * outbound doorbell status register.
-        */
-       register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
        /* msi auto clears the interrupt pending bit. */
        if (!(h->msi_vector || h->msix_vector)) {
+               /* flush the controller write of the reply queue by reading
+                * outbound doorbell status register.
+                */
+               register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
                writel(SA5_OUTDB_CLEAR_PERF_BIT, h->vaddr + SA5_OUTDB_CLEAR);
                /* Do a read in order to flush the write to the controller
                 * (as per spec.)
@@ -271,19 +300,20 @@ static unsigned long SA5_performant_completed(struct ctlr_info *h)
                register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
        }
 
-       if ((*(h->reply_pool_head) & 1) == (h->reply_pool_wraparound)) {
-               register_value = *(h->reply_pool_head);
-               (h->reply_pool_head)++;
+       if ((rq->head[rq->current_entry] & 1) == rq->wraparound) {
+               register_value = rq->head[rq->current_entry];
+               rq->current_entry++;
+               spin_lock_irqsave(&h->lock, flags);
                h->commands_outstanding--;
+               spin_unlock_irqrestore(&h->lock, flags);
        } else {
                register_value = FIFO_EMPTY;
        }
        /* Check for wraparound */
-       if (h->reply_pool_head == (h->reply_pool + h->max_commands)) {
-               h->reply_pool_head = h->reply_pool;
-               h->reply_pool_wraparound ^= 1;
+       if (rq->current_entry == h->max_commands) {
+               rq->current_entry = 0;
+               rq->wraparound ^= 1;
        }
-
        return register_value;
 }
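A brief worked example of the phase ("wraparound") bit that the comparison above relies on, with a deliberately tiny ring (illustrative only):

	/* With max_commands = 4 and rq->wraparound initialized to 1:
	 *
	 * pass 1: the controller posts tags with bit 0 = 1; the host consumes
	 *         entries 0..3 while (entry & 1) == rq->wraparound, then wraps
	 *         current_entry back to 0 and toggles wraparound to 0.
	 * pass 2: new completions arrive with bit 0 = 0; a stale entry still
	 *         carrying bit 0 = 1 no longer matches, so the queue reads as
	 *         FIFO_EMPTY until the controller overwrites it.
	 */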
 
@@ -303,13 +333,18 @@ static unsigned long SA5_fifo_full(struct ctlr_info *h)
  *   returns value read from hardware.
  *     returns FIFO_EMPTY if there is nothing to read
  */
-static unsigned long SA5_completed(struct ctlr_info *h)
+static unsigned long SA5_completed(struct ctlr_info *h,
+       __attribute__((unused)) u8 q)
 {
        unsigned long register_value
                = readl(h->vaddr + SA5_REPLY_PORT_OFFSET);
+       unsigned long flags;
 
-       if (register_value != FIFO_EMPTY)
+       if (register_value != FIFO_EMPTY) {
+               spin_lock_irqsave(&h->lock, flags);
                h->commands_outstanding--;
+               spin_unlock_irqrestore(&h->lock, flags);
+       }
 
 #ifdef HPSA_DEBUG
        if (register_value != FIFO_EMPTY)
index 8049815..a894f2e 100644 (file)
 #define TYPE_CMD                               0x00
 #define TYPE_MSG                               0x01
 
+/* Message Types  */
+#define HPSA_TASK_MANAGEMENT    0x00
+#define HPSA_RESET              0x01
+#define HPSA_SCAN               0x02
+#define HPSA_NOOP               0x03
+
+#define HPSA_CTLR_RESET_TYPE    0x00
+#define HPSA_BUS_RESET_TYPE     0x01
+#define HPSA_TARGET_RESET_TYPE  0x03
+#define HPSA_LUN_RESET_TYPE     0x04
+#define HPSA_NEXUS_RESET_TYPE   0x05
+
+/* Task Management Functions */
+#define HPSA_TMF_ABORT_TASK     0x00
+#define HPSA_TMF_ABORT_TASK_SET 0x01
+#define HPSA_TMF_CLEAR_ACA      0x02
+#define HPSA_TMF_CLEAR_TASK_SET 0x03
+#define HPSA_TMF_QUERY_TASK     0x04
+#define HPSA_TMF_QUERY_TASK_SET 0x05
+#define HPSA_TMF_QUERY_ASYNCEVENT 0x06
+
+
+
 /* config space register offsets */
 #define CFG_VENDORID            0x00
 #define CFG_DEVICEID            0x02
 #define CFGTBL_Trans_Simple     0x00000002l
 #define CFGTBL_Trans_Performant 0x00000004l
 #define CFGTBL_Trans_use_short_tags 0x20000000l
+#define CFGTBL_Trans_enable_directed_msix (1 << 30)
 
 #define CFGTBL_BusType_Ultra2   0x00000001l
 #define CFGTBL_BusType_Ultra3   0x00000002l
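As an aside, a hedged sketch of how the message-type and TMF codes added above might be combined when building a task-management request (the CommandList/RequestBlock field names are assumptions based on the existing hpsa command layout, not part of this hunk):

	c->Request.CDBLen = 16;
	c->Request.Type.Type = TYPE_MSG;
	c->Request.Type.Attribute = ATTR_SIMPLE;
	c->Request.Type.Direction = XFER_WRITE;
	c->Request.Timeout = 0;
	c->Request.CDB[0] = HPSA_TASK_MANAGEMENT;
	c->Request.CDB[1] = HPSA_TMF_ABORT_TASK;
	/* bytes 4..11 would carry the tag of the command to abort (assumed) */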
@@ -162,6 +186,7 @@ struct SenseSubsystem_info {
 #define BMIC_WRITE 0x27
 #define BMIC_CACHE_FLUSH 0xc2
 #define HPSA_CACHE_FLUSH 0x01  /* C2 was already being used by HPSA */
+#define BMIC_FLASH_FIRMWARE 0xF7
 
 /* Command List Structure */
 union SCSI3Addr {
@@ -337,11 +362,17 @@ struct CfgTable {
        u32             MaxPhysicalDevices;
        u32             MaxPhysicalDrivesPerLogicalUnit;
        u32             MaxPerformantModeCommands;
-       u8              reserved[0x78 - 0x58];
+       u32             MaxBlockFetch;
+       u32             PowerConservationSupport;
+       u32             PowerConservationEnable;
+       u32             TMFSupportFlags;
+       u8              TMFTagMask[8];
+       u8              reserved[0x78 - 0x70];
        u32             misc_fw_support; /* offset 0x78 */
 #define                        MISC_FW_DOORBELL_RESET (0x02)
 #define                        MISC_FW_DOORBELL_RESET2 (0x010)
        u8              driver_version[32];
+
 };
 
 #define NUM_BLOCKFETCH_ENTRIES 8
@@ -351,8 +382,8 @@ struct TransTable_struct {
        u32            RepQCount;
        u32            RepQCtrAddrLow32;
        u32            RepQCtrAddrHigh32;
-       u32            RepQAddr0Low32;
-       u32            RepQAddr0High32;
+#define MAX_REPLY_QUEUES 8
+       struct vals32  RepQAddr[MAX_REPLY_QUEUES];
 };
 
 struct hpsa_pci_info {
index cc83b66..c1402fb 100644 (file)
@@ -648,6 +648,7 @@ int fc_lport_destroy(struct fc_lport *lport)
        lport->tt.fcp_abort_io(lport);
        lport->tt.disc_stop_final(lport);
        lport->tt.exch_mgr_reset(lport, 0, 0);
+       cancel_delayed_work_sync(&lport->retry_work);
        fc_fc4_del_lport(lport);
        return 0;
 }
@@ -1564,7 +1565,6 @@ static void fc_lport_timeout(struct work_struct *work)
 
        switch (lport->state) {
        case LPORT_ST_DISABLED:
-               WARN_ON(1);
                break;
        case LPORT_ST_READY:
                break;
index 3a1ffdd..e5da6da 100644 (file)
@@ -93,6 +93,9 @@ struct lpfc_sli2_slim;
 /* lpfc wait event data ready flag */
 #define LPFC_DATA_READY                (1<<0)
 
+/* queue dump line buffer size */
+#define LPFC_LBUF_SZ           128
+
 enum lpfc_polling_flags {
        ENABLE_FCP_RING_POLLING = 0x1,
        DISABLE_FCP_RING_INT    = 0x2
@@ -620,6 +623,7 @@ struct lpfc_hba {
 #define HBA_AER_ENABLED                0x1000 /* AER enabled with HBA */
 #define HBA_DEVLOSS_TMO         0x2000 /* HBA in devloss timeout */
 #define HBA_RRQ_ACTIVE         0x4000 /* process the rrq active list */
+#define HBA_FCP_IOQ_FLUSH      0x8000 /* FCP I/O queues being flushed */
        uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
        struct lpfc_dmabuf slim2p;
 
index 141e4b4..253d9a8 100644 (file)
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2009-2011 Emulex.  All rights reserved.           *
+ * Copyright (C) 2009-2012 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *
  * www.emulex.com                                                  *
  *                                                                 *
@@ -599,6 +599,7 @@ lpfc_bsg_rport_els(struct fc_bsg_job *job)
 
        cmdiocbq->iocb_cmpl = lpfc_bsg_rport_els_cmp;
        cmdiocbq->context1 = dd_data;
+       cmdiocbq->context_un.ndlp = ndlp;
        cmdiocbq->context2 = rspiocbq;
        dd_data->type = TYPE_IOCB;
        dd_data->context_un.iocb.cmdiocbq = cmdiocbq;
@@ -3978,6 +3979,7 @@ lpfc_bsg_handle_sli_cfg_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
                } else if (subsys == SLI_CONFIG_SUBSYS_COMN) {
                        switch (opcode) {
                        case COMN_OPCODE_GET_CNTL_ADDL_ATTRIBUTES:
+                       case COMN_OPCODE_GET_CNTL_ATTRIBUTES:
                                lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
                                                "3106 Handled SLI_CONFIG "
                                                "subsys_comn, opcode:x%x\n",
index edfe61f..67f7d0a 100644 (file)
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2010 Emulex.  All rights reserved.                *
+ * Copyright (C) 2010-2012 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *
  * www.emulex.com                                                  *
  *                                                                 *
@@ -249,6 +249,7 @@ struct lpfc_sli_config_emb1_subsys {
 #define COMN_OPCODE_READ_OBJECT_LIST   0xAD
 #define COMN_OPCODE_DELETE_OBJECT      0xAE
 #define COMN_OPCODE_GET_CNTL_ADDL_ATTRIBUTES   0x79
+#define COMN_OPCODE_GET_CNTL_ATTRIBUTES        0x20
        uint32_t timeout;
        uint32_t request_length;
        uint32_t word9;
index 330dd71..9b2a16f 100644 (file)
@@ -254,6 +254,7 @@ int
 lpfc_sli_handle_fast_ring_event(struct lpfc_hba *,
                        struct lpfc_sli_ring *, uint32_t);
 
+struct lpfc_iocbq *__lpfc_sli_get_iocbq(struct lpfc_hba *);
 struct lpfc_iocbq * lpfc_sli_get_iocbq(struct lpfc_hba *);
 void lpfc_sli_release_iocbq(struct lpfc_hba *, struct lpfc_iocbq *);
 uint16_t lpfc_sli_next_iotag(struct lpfc_hba *, struct lpfc_iocbq *);
@@ -460,6 +461,7 @@ int lpfc_hba_init_link_fc_topology(struct lpfc_hba *, uint32_t, uint32_t);
 int lpfc_issue_reg_vfi(struct lpfc_vport *);
 int lpfc_issue_unreg_vfi(struct lpfc_vport *);
 int lpfc_selective_reset(struct lpfc_hba *);
-int lpfc_sli4_read_config(struct lpfc_hba *phba);
-int lpfc_scsi_buf_update(struct lpfc_hba *phba);
-void lpfc_sli4_node_prep(struct lpfc_hba *phba);
+int lpfc_sli4_read_config(struct lpfc_hba *);
+void lpfc_sli4_node_prep(struct lpfc_hba *);
+int lpfc_sli4_xri_sgl_update(struct lpfc_hba *);
+void lpfc_free_sgl_list(struct lpfc_hba *, struct list_head *);
index af04b0d..3217d63 100644 (file)
@@ -4466,3 +4466,49 @@ lpfc_debugfs_terminate(struct lpfc_vport *vport)
 #endif
        return;
 }
+
+/*
+ * Driver debug utility routines outside of debugfs. The debug utility
+ * routines implemented here are intended to be used in the instrumented
+ * debug driver for debugging host or port issues.
+ */
+
+/**
+ * lpfc_debug_dump_all_queues - dump all the queues with a hba
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps entries of all the queues associated with the @phba.
+ **/
+void
+lpfc_debug_dump_all_queues(struct lpfc_hba *phba)
+{
+       int fcp_wqidx;
+
+       /*
+        * Dump Work Queues (WQs)
+        */
+       lpfc_debug_dump_mbx_wq(phba);
+       lpfc_debug_dump_els_wq(phba);
+
+       for (fcp_wqidx = 0; fcp_wqidx < phba->cfg_fcp_wq_count; fcp_wqidx++)
+               lpfc_debug_dump_fcp_wq(phba, fcp_wqidx);
+
+       lpfc_debug_dump_hdr_rq(phba);
+       lpfc_debug_dump_dat_rq(phba);
+       /*
+        * Dump Complete Queues (CQs)
+        */
+       lpfc_debug_dump_mbx_cq(phba);
+       lpfc_debug_dump_els_cq(phba);
+
+       for (fcp_wqidx = 0; fcp_wqidx < phba->cfg_fcp_wq_count; fcp_wqidx++)
+               lpfc_debug_dump_fcp_cq(phba, fcp_wqidx);
+
+       /*
+        * Dump Event Queues (EQs)
+        */
+       lpfc_debug_dump_sp_eq(phba);
+
+       for (fcp_wqidx = 0; fcp_wqidx < phba->cfg_fcp_wq_count; fcp_wqidx++)
+               lpfc_debug_dump_fcp_eq(phba, fcp_wqidx);
+}
index f83bd94..616c400 100644 (file)
@@ -267,3 +267,421 @@ struct lpfc_idiag {
 #define LPFC_DISC_TRC_DISCOVERY                0xef    /* common mask for general
                                                 * discovery */
 #endif /* H_LPFC_DEBUG_FS */
+
+
+/*
+ * Driver debug utility routines outside of debugfs. The debug utility
+ * routines implemented here are intended to be used in the instrumented
+ * debug driver for debugging host or port issues.
+ */
+
+/**
+ * lpfc_debug_dump_qe - dump a specific entry from a queue
+ * @q: Pointer to the queue descriptor.
+ * @idx: Index to the entry on the queue.
+ *
+ * This function dumps an entry indexed by @idx from a queue specified by the
+ * queue descriptor @q.
+ **/
+static inline void
+lpfc_debug_dump_qe(struct lpfc_queue *q, uint32_t idx)
+{
+       char line_buf[LPFC_LBUF_SZ];
+       int i, esize, qe_word_cnt, len;
+       uint32_t *pword;
+
+       /* sanity checks */
+       if (!q)
+               return;
+       if (idx >= q->entry_count)
+               return;
+
+       esize = q->entry_size;
+       qe_word_cnt = esize / sizeof(uint32_t);
+       pword = q->qe[idx].address;
+
+       len = 0;
+       len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "QE[%04d]: ", idx);
+       if (qe_word_cnt > 8)
+               printk(KERN_ERR "%s\n", line_buf);
+
+       for (i = 0; i < qe_word_cnt; i++) {
+               if (!(i % 8)) {
+                       if (i != 0)
+                               printk(KERN_ERR "%s\n", line_buf);
+                       if (qe_word_cnt > 8) {
+                               len = 0;
+                               memset(line_buf, 0, LPFC_LBUF_SZ);
+                               len += snprintf(line_buf+len, LPFC_LBUF_SZ-len,
+                                               "%03d: ", i);
+                       }
+               }
+               len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "%08x ",
+                               ((uint32_t)*pword) & 0xffffffff);
+               pword++;
+       }
+       if (qe_word_cnt <= 8 || (i - 1) % 8)
+               printk(KERN_ERR "%s\n", line_buf);
+}
+
+/**
+ * lpfc_debug_dump_q - dump all entries from a specific queue
+ * @q: Pointer to the queue descriptor.
+ *
+ * This function dumps all entries from a queue specified by the queue
+ * descriptor @q.
+ **/
+static inline void
+lpfc_debug_dump_q(struct lpfc_queue *q)
+{
+       int idx, entry_count;
+
+       /* sanity check */
+       if (!q)
+               return;
+
+       dev_printk(KERN_ERR, &(((q->phba))->pcidev)->dev,
+               "%d: [qid:%d, type:%d, subtype:%d, "
+               "qe_size:%d, qe_count:%d, "
+               "host_index:%d, port_index:%d]\n",
+               (q->phba)->brd_no,
+               q->queue_id, q->type, q->subtype,
+               q->entry_size, q->entry_count,
+               q->host_index, q->hba_index);
+       entry_count = q->entry_count;
+       for (idx = 0; idx < entry_count; idx++)
+               lpfc_debug_dump_qe(q, idx);
+       printk(KERN_ERR "\n");
+}
+
+/**
+ * lpfc_debug_dump_fcp_wq - dump all entries from a fcp work queue
+ * @phba: Pointer to HBA context object.
+ * @fcp_wqidx: Index to a FCP work queue.
+ *
+ * This function dumps all entries from a FCP work queue specified by the
+ * @fcp_wqidx.
+ **/
+static inline void
+lpfc_debug_dump_fcp_wq(struct lpfc_hba *phba, int fcp_wqidx)
+{
+       /* sanity check */
+       if (fcp_wqidx >= phba->cfg_fcp_wq_count)
+               return;
+
+       printk(KERN_ERR "FCP WQ: WQ[Idx:%d|Qid:%d]\n",
+               fcp_wqidx, phba->sli4_hba.fcp_wq[fcp_wqidx]->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.fcp_wq[fcp_wqidx]);
+}
+
+/**
+ * lpfc_debug_dump_fcp_cq - dump all entries from a fcp work queue's cmpl queue
+ * @phba: Pointer to HBA context object.
+ * @fcp_wqidx: Index to a FCP work queue.
+ *
+ * This function dumps all entries from a FCP complete queue which is
+ * associated with the FCP work queue specified by the @fcp_wqidx.
+ **/
+static inline void
+lpfc_debug_dump_fcp_cq(struct lpfc_hba *phba, int fcp_wqidx)
+{
+       int fcp_cqidx, fcp_cqid;
+
+       /* sanity check */
+       if (fcp_wqidx >= phba->cfg_fcp_wq_count)
+               return;
+
+       fcp_cqid = phba->sli4_hba.fcp_wq[fcp_wqidx]->assoc_qid;
+       for (fcp_cqidx = 0; fcp_cqidx < phba->cfg_fcp_eq_count; fcp_cqidx++)
+               if (phba->sli4_hba.fcp_cq[fcp_cqidx]->queue_id == fcp_cqid)
+                       break;
+       if (fcp_cqidx >= phba->cfg_fcp_eq_count)
+               return;
+
+       printk(KERN_ERR "FCP CQ: WQ[Idx:%d|Qid%d]->CQ[Idx%d|Qid%d]:\n",
+               fcp_wqidx, phba->sli4_hba.fcp_wq[fcp_wqidx]->queue_id,
+               fcp_cqidx, fcp_cqid);
+       lpfc_debug_dump_q(phba->sli4_hba.fcp_cq[fcp_cqidx]);
+}
+
+/**
+ * lpfc_debug_dump_fcp_eq - dump all entries from a fcp work queue's evt queue
+ * @phba: Pointer to HBA context object.
+ * @fcp_wqidx: Index to a FCP work queue.
+ *
+ * This function dumps all entries from a FCP event queue which is
+ * associated with the FCP work queue specified by the @fcp_wqidx.
+ **/
+static inline void
+lpfc_debug_dump_fcp_eq(struct lpfc_hba *phba, int fcp_wqidx)
+{
+       struct lpfc_queue *qdesc;
+       int fcp_eqidx, fcp_eqid;
+       int fcp_cqidx, fcp_cqid;
+
+       /* sanity check */
+       if (fcp_wqidx >= phba->cfg_fcp_wq_count)
+               return;
+       fcp_cqid = phba->sli4_hba.fcp_wq[fcp_wqidx]->assoc_qid;
+       for (fcp_cqidx = 0; fcp_cqidx < phba->cfg_fcp_eq_count; fcp_cqidx++)
+               if (phba->sli4_hba.fcp_cq[fcp_cqidx]->queue_id == fcp_cqid)
+                       break;
+       if (fcp_cqidx >= phba->cfg_fcp_eq_count)
+               return;
+
+       if (phba->cfg_fcp_eq_count == 0) {
+               fcp_eqidx = -1;
+               fcp_eqid = phba->sli4_hba.sp_eq->queue_id;
+               qdesc = phba->sli4_hba.sp_eq;
+       } else {
+               fcp_eqidx = fcp_cqidx;
+               fcp_eqid = phba->sli4_hba.fp_eq[fcp_eqidx]->queue_id;
+               qdesc = phba->sli4_hba.fp_eq[fcp_eqidx];
+       }
+
+       printk(KERN_ERR "FCP EQ: WQ[Idx:%d|Qid:%d]->CQ[Idx:%d|Qid:%d]->"
+               "EQ[Idx:%d|Qid:%d]\n",
+               fcp_wqidx, phba->sli4_hba.fcp_wq[fcp_wqidx]->queue_id,
+               fcp_cqidx, fcp_cqid, fcp_eqidx, fcp_eqid);
+       lpfc_debug_dump_q(qdesc);
+}
+
+/**
+ * lpfc_debug_dump_els_wq - dump all entries from the els work queue
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps all entries from the ELS work queue.
+ **/
+static inline void
+lpfc_debug_dump_els_wq(struct lpfc_hba *phba)
+{
+       printk(KERN_ERR "ELS WQ: WQ[Qid:%d]:\n",
+               phba->sli4_hba.els_wq->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.els_wq);
+}
+
+/**
+ * lpfc_debug_dump_mbx_wq - dump all entries from the mbox work queue
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps all entries from the MBOX work queue.
+ **/
+static inline void
+lpfc_debug_dump_mbx_wq(struct lpfc_hba *phba)
+{
+       printk(KERN_ERR "MBX WQ: WQ[Qid:%d]\n",
+               phba->sli4_hba.mbx_wq->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.mbx_wq);
+}
+
+/**
+ * lpfc_debug_dump_dat_rq - dump all entries from the receive data queue
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps all entries from the receive data queue.
+ **/
+static inline void
+lpfc_debug_dump_dat_rq(struct lpfc_hba *phba)
+{
+       printk(KERN_ERR "DAT RQ: RQ[Qid:%d]\n",
+               phba->sli4_hba.dat_rq->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.dat_rq);
+}
+
+/**
+ * lpfc_debug_dump_hdr_rq - dump all entries from the receive header queue
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps all entries from the receive header queue.
+ **/
+static inline void
+lpfc_debug_dump_hdr_rq(struct lpfc_hba *phba)
+{
+       printk(KERN_ERR "HDR RQ: RQ[Qid:%d]\n",
+               phba->sli4_hba.hdr_rq->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.hdr_rq);
+}
+
+/**
+ * lpfc_debug_dump_els_cq - dump all entries from the els complete queue
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps all entries from the els complete queue.
+ **/
+static inline void
+lpfc_debug_dump_els_cq(struct lpfc_hba *phba)
+{
+       printk(KERN_ERR "ELS CQ: WQ[Qid:%d]->CQ[Qid:%d]\n",
+               phba->sli4_hba.els_wq->queue_id,
+               phba->sli4_hba.els_cq->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.els_cq);
+}
+
+/**
+ * lpfc_debug_dump_mbx_cq - dump all entries from the mbox complete queue
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps all entries from the mbox complete queue.
+ **/
+static inline void
+lpfc_debug_dump_mbx_cq(struct lpfc_hba *phba)
+{
+       printk(KERN_ERR "MBX CQ: WQ[Qid:%d]->CQ[Qid:%d]\n",
+               phba->sli4_hba.mbx_wq->queue_id,
+               phba->sli4_hba.mbx_cq->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.mbx_cq);
+}
+
+/**
+ * lpfc_debug_dump_sp_eq - dump all entries from slow-path event queue
+ * @phba: Pointer to HBA context object.
+ *
+ * This function dumps all entries from the slow-path event queue.
+ **/
+static inline void
+lpfc_debug_dump_sp_eq(struct lpfc_hba *phba)
+{
+       printk(KERN_ERR "SP EQ: WQ[Qid:%d/Qid:%d]->CQ[Qid:%d/Qid:%d]->"
+               "EQ[Qid:%d]:\n",
+               phba->sli4_hba.mbx_wq->queue_id,
+               phba->sli4_hba.els_wq->queue_id,
+               phba->sli4_hba.mbx_cq->queue_id,
+               phba->sli4_hba.els_cq->queue_id,
+               phba->sli4_hba.sp_eq->queue_id);
+       lpfc_debug_dump_q(phba->sli4_hba.sp_eq);
+}
+
+/**
+ * lpfc_debug_dump_wq_by_id - dump all entries from a work queue by queue id
+ * @phba: Pointer to HBA context object.
+ * @qid: Work queue identifier.
+ *
+ * This function dumps all entries from a work queue identified by the queue
+ * identifier.
+ **/
+static inline void
+lpfc_debug_dump_wq_by_id(struct lpfc_hba *phba, int qid)
+{
+       int wq_idx;
+
+       for (wq_idx = 0; wq_idx < phba->cfg_fcp_wq_count; wq_idx++)
+               if (phba->sli4_hba.fcp_wq[wq_idx]->queue_id == qid)
+                       break;
+       if (wq_idx < phba->cfg_fcp_wq_count) {
+               printk(KERN_ERR "FCP WQ[Idx:%d|Qid:%d]\n", wq_idx, qid);
+               lpfc_debug_dump_q(phba->sli4_hba.fcp_wq[wq_idx]);
+               return;
+       }
+
+       if (phba->sli4_hba.els_wq->queue_id == qid) {
+               printk(KERN_ERR "ELS WQ[Qid:%d]\n", qid);
+               lpfc_debug_dump_q(phba->sli4_hba.els_wq);
+       }
+}
+
+/**
+ * lpfc_debug_dump_mq_by_id - dump all entries from a mbox queue by queue id
+ * @phba: Pointer to HBA context object.
+ * @qid: Mbox work queue identifier.
+ *
+ * This function dumps all entries from a mbox work queue identified by the
+ * queue identifier.
+ **/
+static inline void
+lpfc_debug_dump_mq_by_id(struct lpfc_hba *phba, int qid)
+{
+       if (phba->sli4_hba.mbx_wq->queue_id == qid) {
+               printk(KERN_ERR "MBX WQ[Qid:%d]\n", qid);
+               lpfc_debug_dump_q(phba->sli4_hba.mbx_wq);
+       }
+}
+
+/**
+ * lpfc_debug_dump_rq_by_id - dump all entries from a receive queue by queue id
+ * @phba: Pointer to HBA context object.
+ * @qid: Receive queue identifier.
+ *
+ * This function dumps all entries from a receive queue identified by the
+ * queue identifier.
+ **/
+static inline void
+lpfc_debug_dump_rq_by_id(struct lpfc_hba *phba, int qid)
+{
+       if (phba->sli4_hba.hdr_rq->queue_id == qid) {
+               printk(KERN_ERR "HDR RQ[Qid:%d]\n", qid);
+               lpfc_debug_dump_q(phba->sli4_hba.hdr_rq);
+               return;
+       }
+       if (phba->sli4_hba.dat_rq->queue_id == qid) {
+               printk(KERN_ERR "DAT RQ[Qid:%d]\n", qid);
+               lpfc_debug_dump_q(phba->sli4_hba.dat_rq);
+       }
+}
+
+/**
+ * lpfc_debug_dump_cq_by_id - dump all entries from a cmpl queue by queue id
+ * @phba: Pointer to HBA context object.
+ * @qid: Complete queue identifier.
+ *
+ * This function dumps all entries from a complete queue identified by the
+ * queue identifier.
+ **/
+static inline void
+lpfc_debug_dump_cq_by_id(struct lpfc_hba *phba, int qid)
+{
+       int cq_idx = 0;
+
+       do {
+               if (phba->sli4_hba.fcp_cq[cq_idx]->queue_id == qid)
+                       break;
+       } while (++cq_idx < phba->cfg_fcp_eq_count);
+
+       if (cq_idx < phba->cfg_fcp_eq_count) {
+               printk(KERN_ERR "FCP CQ[Idx:%d|Qid:%d]\n", cq_idx, qid);
+               lpfc_debug_dump_q(phba->sli4_hba.fcp_cq[cq_idx]);
+               return;
+       }
+
+       if (phba->sli4_hba.els_cq->queue_id == qid) {
+               printk(KERN_ERR "ELS CQ[Qid:%d]\n", qid);
+               lpfc_debug_dump_q(phba->sli4_hba.els_cq);
+               return;
+       }
+
+       if (phba->sli4_hba.mbx_cq->queue_id == qid) {
+               printk(KERN_ERR "MBX CQ[Qid:%d]\n", qid);
+               lpfc_debug_dump_q(phba->sli4_hba.mbx_cq);
+       }
+}
+
+/**
+ * lpfc_debug_dump_eq_by_id - dump all entries from an event queue by queue id
+ * @phba: Pointer to HBA context object.
+ * @qid: Complete queue identifier.
+ *
+ * This function dumps all entries from an event queue identified by the
+ * queue identifier.
+ **/
+static inline void
+lpfc_debug_dump_eq_by_id(struct lpfc_hba *phba, int qid)
+{
+       int eq_idx;
+
+       for (eq_idx = 0; eq_idx < phba->cfg_fcp_eq_count; eq_idx++) {
+               if (phba->sli4_hba.fp_eq[eq_idx]->queue_id == qid)
+                       break;
+       }
+
+       if (eq_idx < phba->cfg_fcp_eq_count) {
+               printk(KERN_ERR "FCP EQ[Idx:%d|Qid:%d]\n", eq_idx, qid);
+               lpfc_debug_dump_q(phba->sli4_hba.fp_eq[eq_idx]);
+               return;
+       }
+
+       if (phba->sli4_hba.sp_eq->queue_id == qid) {
+               printk(KERN_ERR "SP EQ[|Qid:%d]\n", qid);
+               lpfc_debug_dump_q(phba->sli4_hba.sp_eq);
+       }
+}
+
+void lpfc_debug_dump_all_queues(struct lpfc_hba *);
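A hedged usage sketch for the by-id dump helpers above (the error-path context and the cqe_wqid variable are hypothetical, not part of this patch):

	/* e.g. from a slow-path error handler, once a suspect work queue id
	 * has been extracted from a completion entry:
	 */
	if (unlikely(phba->link_state == LPFC_HBA_ERROR)) {
		lpfc_debug_dump_wq_by_id(phba, cqe_wqid);
		lpfc_debug_dump_all_queues(phba);
	}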
index 3407b39..d54ae19 100644 (file)
@@ -230,27 +230,43 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
 
        INIT_LIST_HEAD(&pbuflist->list);
 
-       icmd->un.elsreq64.bdl.addrHigh = putPaddrHigh(pbuflist->phys);
-       icmd->un.elsreq64.bdl.addrLow = putPaddrLow(pbuflist->phys);
-       icmd->un.elsreq64.bdl.bdeFlags = BUFF_TYPE_BLP_64;
-       icmd->un.elsreq64.remoteID = did;       /* DID */
        if (expectRsp) {
+               icmd->un.elsreq64.bdl.addrHigh = putPaddrHigh(pbuflist->phys);
+               icmd->un.elsreq64.bdl.addrLow = putPaddrLow(pbuflist->phys);
+               icmd->un.elsreq64.bdl.bdeFlags = BUFF_TYPE_BLP_64;
                icmd->un.elsreq64.bdl.bdeSize = (2 * sizeof(struct ulp_bde64));
+
+               icmd->un.elsreq64.remoteID = did;               /* DID */
                icmd->ulpCommand = CMD_ELS_REQUEST64_CR;
                icmd->ulpTimeout = phba->fc_ratov * 2;
        } else {
-               icmd->un.elsreq64.bdl.bdeSize = sizeof(struct ulp_bde64);
+               icmd->un.xseq64.bdl.addrHigh = putPaddrHigh(pbuflist->phys);
+               icmd->un.xseq64.bdl.addrLow = putPaddrLow(pbuflist->phys);
+               icmd->un.xseq64.bdl.bdeFlags = BUFF_TYPE_BLP_64;
+               icmd->un.xseq64.bdl.bdeSize = sizeof(struct ulp_bde64);
+               icmd->un.xseq64.xmit_els_remoteID = did;        /* DID */
                icmd->ulpCommand = CMD_XMIT_ELS_RSP64_CX;
        }
        icmd->ulpBdeCount = 1;
        icmd->ulpLe = 1;
        icmd->ulpClass = CLASS3;
 
-       if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
-               icmd->un.elsreq64.myID = vport->fc_myDID;
+       /*
+        * If we have NPIV enabled, we want to send ELS traffic by VPI.
+        * For SLI4, since the driver controls VPIs we also want to include
+        * all ELS pt2pt protocol traffic as well.
+        */
+       if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) ||
+               ((phba->sli_rev == LPFC_SLI_REV4) &&
+                   (vport->fc_flag & FC_PT2PT))) {
+
+               if (expectRsp) {
+                       icmd->un.elsreq64.myID = vport->fc_myDID;
+
+                       /* For ELS_REQUEST64_CR, use the VPI by default */
+                       icmd->ulpContext = phba->vpi_ids[vport->vpi];
+               }
 
-               /* For ELS_REQUEST64_CR, use the VPI by default */
-               icmd->ulpContext = phba->vpi_ids[vport->vpi];
                icmd->ulpCt_h = 0;
                /* The CT field must be 0=INVALID_RPI for the ECHO cmd */
                if (elscmd == ELS_CMD_ECHO)
@@ -438,9 +454,10 @@ lpfc_issue_reg_vfi(struct lpfc_vport *vport)
        int rc = 0;
 
        sp = &phba->fc_fabparam;
-       /* move forward in case of SLI4 FC port loopback test */
+       /* move forward in case of SLI4 FC port loopback test and pt2pt mode */
        if ((phba->sli_rev == LPFC_SLI_REV4) &&
-           !(phba->link_flag & LS_LOOPBACK_MODE)) {
+           !(phba->link_flag & LS_LOOPBACK_MODE) &&
+           !(vport->fc_flag & FC_PT2PT)) {
                ndlp = lpfc_findnode_did(vport, Fabric_DID);
                if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) {
                        rc = -ENODEV;
@@ -707,14 +724,17 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
                        lpfc_sli4_unreg_all_rpis(vport);
                        lpfc_mbx_unreg_vpi(vport);
                        spin_lock_irq(shost->host_lock);
-                       vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
-                       /*
-                       * If VPI is unreged, driver need to do INIT_VPI
-                       * before re-registering
-                       */
                        vport->fc_flag |= FC_VPORT_NEEDS_INIT_VPI;
                        spin_unlock_irq(shost->host_lock);
                }
+
+               /*
+                * For SLI3 and SLI4, the VPI needs to be reregistered in
+                * response to this fabric parameter change event.
+                */
+               spin_lock_irq(shost->host_lock);
+               vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+               spin_unlock_irq(shost->host_lock);
        } else if ((phba->sli_rev == LPFC_SLI_REV4) &&
                !(vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)) {
                        /*
@@ -817,6 +837,17 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
                        mempool_free(mbox, phba->mbox_mem_pool);
                        goto fail;
                }
+
+               /*
+                * For SLI4, the VFI/VPI are registered AFTER the
+                * Nport with the higher WWPN sends the PLOGI with
+                * an assigned NPortId.
+                */
+
+               /* not equal */
+               if ((phba->sli_rev == LPFC_SLI_REV4) && rc)
+                       lpfc_issue_reg_vfi(vport);
+
                /* Decrement ndlp reference count indicating that ndlp can be
                 * safely released when other references to it are done.
                 */
@@ -2972,7 +3003,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
                         * ABTS we cannot generate an RRQ.
                         */
                        lpfc_set_rrq_active(phba, ndlp,
-                                        cmdiocb->sli4_xritag, 0, 0);
+                                        cmdiocb->sli4_lxritag, 0, 0);
                }
                break;
        case IOSTAT_LOCAL_REJECT:
@@ -3803,10 +3834,11 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
        /* Xmit ELS ACC response tag <ulpIoTag> */
        lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
                         "0128 Xmit ELS ACC response tag x%x, XRI: x%x, "
-                        "DID: x%x, nlp_flag: x%x nlp_state: x%x RPI: x%x\n",
+                        "DID: x%x, nlp_flag: x%x nlp_state: x%x RPI: x%x "
+                        "fc_flag x%x\n",
                         elsiocb->iotag, elsiocb->iocb.ulpContext,
                         ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
-                        ndlp->nlp_rpi);
+                        ndlp->nlp_rpi, vport->fc_flag);
        if (ndlp->nlp_flag & NLP_LOGO_ACC) {
                spin_lock_irq(shost->host_lock);
                ndlp->nlp_flag &= ~NLP_LOGO_ACC;
@@ -4936,8 +4968,6 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
                return 1;
        }
 
-       did = Fabric_DID;
-
        if ((lpfc_check_sparm(vport, ndlp, sp, CLASS3, 1))) {
                /* For a FLOGI we accept, then if our portname is greater
                 * then the remote portname we initiate Nport login.
@@ -4976,26 +5006,82 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
                        spin_lock_irq(shost->host_lock);
                        vport->fc_flag |= FC_PT2PT_PLOGI;
                        spin_unlock_irq(shost->host_lock);
+
+                       /* If we have the high WWPN we can assign our own
+                        * myDID; otherwise, we have to WAIT for a PLOGI
+                        * from the remote NPort to find out what it
+                        * will be.
+                        */
+                       vport->fc_myDID = PT2PT_LocalID;
                }
+
+               /*
+                * The vport state should go to LPFC_FLOGI only
+                * AFTER we issue a FLOGI, not receive one.
+                */
                spin_lock_irq(shost->host_lock);
                vport->fc_flag |= FC_PT2PT;
                vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
                spin_unlock_irq(shost->host_lock);
+
+               /*
+                * We temporarily set fc_myDID to make it look like we are
+                * a Fabric. This is done just so we end up with the right
+                * did / sid on the FLOGI ACC rsp.
+                */
+               did = vport->fc_myDID;
+               vport->fc_myDID = Fabric_DID;
+
        } else {
                /* Reject this request because invalid parameters */
                stat.un.b.lsRjtRsvd0 = 0;
                stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
                stat.un.b.lsRjtRsnCodeExp = LSEXP_SPARM_OPTIONS;
                stat.un.b.vendorUnique = 0;
+
+               /*
+                * We temporarily set fc_myDID to make it look like we are
+                * a Fabric. This is done just so we end up with the right
+                * did / sid on the FLOGI LS_RJT rsp.
+                */
+               did = vport->fc_myDID;
+               vport->fc_myDID = Fabric_DID;
+
                lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
                        NULL);
+
+       /* Now let's put fc_myDID back to what it's supposed to be */
+               vport->fc_myDID = did;
+
                return 1;
        }
 
        /* Send back ACC */
        lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, NULL);
 
+       /* Now let's put fc_myDID back to what it's supposed to be */
+       vport->fc_myDID = did;
+
+       if (!(vport->fc_flag & FC_PT2PT_PLOGI)) {
+
+               mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+               if (!mbox)
+                       goto fail;
+
+               lpfc_config_link(phba, mbox);
+
+               mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+               mbox->vport = vport;
+               rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+               if (rc == MBX_NOT_FINISHED) {
+                       mempool_free(mbox, phba->mbox_mem_pool);
+                       goto fail;
+               }
+       }
+
        return 0;
+fail:
+       return 1;
 }
 
 /**
@@ -5176,7 +5262,6 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
        }
 
        cmdsize = sizeof(struct RLS_RSP) + sizeof(uint32_t);
-       mempool_free(pmb, phba->mbox_mem_pool);
        elsiocb = lpfc_prep_els_iocb(phba->pport, 0, cmdsize,
                                     lpfc_max_els_tries, ndlp,
                                     ndlp->nlp_DID, ELS_CMD_ACC);
@@ -5184,8 +5269,10 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
        /* Decrement the ndlp reference count from previous mbox command */
        lpfc_nlp_put(ndlp);
 
-       if (!elsiocb)
+       if (!elsiocb) {
+               mempool_free(pmb, phba->mbox_mem_pool);
                return;
+       }
 
        icmd = &elsiocb->iocb;
        icmd->ulpContext = rxid;
@@ -5202,7 +5289,7 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
        rls_rsp->primSeqErrCnt = cpu_to_be32(mb->un.varRdLnk.primSeqErrCnt);
        rls_rsp->invalidXmitWord = cpu_to_be32(mb->un.varRdLnk.invalidXmitWord);
        rls_rsp->crcCnt = cpu_to_be32(mb->un.varRdLnk.crcCnt);
-
+       mempool_free(pmb, phba->mbox_mem_pool);
        /* Xmit ELS RLS ACC response tag <ulpIoTag> */
        lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_ELS,
                         "2874 Xmit ELS RLS ACC response tag x%x xri x%x, "
@@ -5586,7 +5673,7 @@ lpfc_issue_els_rrq(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
        pcmd += sizeof(uint32_t);
        els_rrq = (struct RRQ *) pcmd;
 
-       bf_set(rrq_oxid, els_rrq, rrq->xritag);
+       bf_set(rrq_oxid, els_rrq, phba->sli4_hba.xri_ids[rrq->xritag]);
        bf_set(rrq_rxid, els_rrq, rrq->rxid);
        bf_set(rrq_did, els_rrq, vport->fc_myDID);
        els_rrq->rrq = cpu_to_be32(els_rrq->rrq);
@@ -7873,7 +7960,9 @@ lpfc_sli4_els_xri_aborted(struct lpfc_hba *phba,
                        sglq_entry->state = SGL_FREED;
                        spin_unlock(&phba->sli4_hba.abts_sgl_list_lock);
                        spin_unlock_irqrestore(&phba->hbalock, iflag);
-                       lpfc_set_rrq_active(phba, ndlp, xri, rxid, 1);
+                       lpfc_set_rrq_active(phba, ndlp,
+                               sglq_entry->sli4_lxritag,
+                               rxid, 1);
 
                        /* Check if TXQ queue needs to be serviced */
                        if (pring->txq_cnt)
index b507536..5bb269e 100644 (file)
@@ -713,6 +713,7 @@ lpfc_do_work(void *p)
        int rc;
 
        set_user_nice(current, -20);
+       current->flags |= PF_NOFREEZE;
        phba->data_flags = 0;
 
        while (!kthread_should_stop()) {
@@ -1094,7 +1095,7 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
        /* Start discovery by sending a FLOGI. port_state is identically
         * LPFC_FLOGI while waiting for FLOGI cmpl
         */
-       if (vport->port_state != LPFC_FLOGI)
+       if (vport->port_state != LPFC_FLOGI || vport->fc_flag & FC_PT2PT_PLOGI)
                lpfc_initial_flogi(vport);
        return;
 
@@ -2881,9 +2882,14 @@ lpfc_mbx_cmpl_reg_vfi(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
        }
 
        if (vport->port_state == LPFC_FABRIC_CFG_LINK) {
-               /* For private loop just start discovery and we are done. */
-               if ((phba->fc_topology == LPFC_TOPOLOGY_LOOP) &&
-                   !(vport->fc_flag & FC_PUBLIC_LOOP)) {
+               /*
+                * For private loop or for NPort pt2pt,
+                * just start discovery and we are done.
+                */
+               if ((vport->fc_flag & FC_PT2PT) ||
+                   ((phba->fc_topology == LPFC_TOPOLOGY_LOOP) &&
+                   !(vport->fc_flag & FC_PUBLIC_LOOP))) {
+
                        /* Use loop map to make discovery list */
                        lpfc_disc_list_loopmap(vport);
                        /* Start discovery */
@@ -5490,9 +5496,9 @@ lpfc_nlp_release(struct kref *kref)
                ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type);
 
        lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE,
-                       "0279 lpfc_nlp_release: ndlp:x%p "
+                       "0279 lpfc_nlp_release: ndlp:x%p did %x "
                        "usgmap:x%x refcnt:%d\n",
-                       (void *)ndlp, ndlp->nlp_usg_map,
+                       (void *)ndlp, ndlp->nlp_DID, ndlp->nlp_usg_map,
                        atomic_read(&ndlp->kref.refcount));
 
        /* remove ndlp from action. */
index 5f280b5..41bb1d2 100644 (file)
@@ -3374,6 +3374,9 @@ typedef struct {
        WORD5 w5;               /* Header control/status word */
 } XMT_SEQ_FIELDS64;
 
+/* This word is remote ports D_ID for XMIT_ELS_RSP64 */
+#define xmit_els_remoteID xrsqRo
+
 /* IOCB Command template for 64 bit RCV_SEQUENCE64 */
 typedef struct {
        struct ulp_bde64 rcvBde;
index 91f0976..f1946df 100644 (file)
@@ -228,19 +228,15 @@ struct lpfc_sli4_flags {
 #define lpfc_idx_rsrc_rdy_MASK         0x00000001
 #define lpfc_idx_rsrc_rdy_WORD         word0
 #define LPFC_IDX_RSRC_RDY              1
-#define lpfc_xri_rsrc_rdy_SHIFT                1
-#define lpfc_xri_rsrc_rdy_MASK         0x00000001
-#define lpfc_xri_rsrc_rdy_WORD         word0
-#define LPFC_XRI_RSRC_RDY              1
-#define lpfc_rpi_rsrc_rdy_SHIFT                2
+#define lpfc_rpi_rsrc_rdy_SHIFT                1
 #define lpfc_rpi_rsrc_rdy_MASK         0x00000001
 #define lpfc_rpi_rsrc_rdy_WORD         word0
 #define LPFC_RPI_RSRC_RDY              1
-#define lpfc_vpi_rsrc_rdy_SHIFT                3
+#define lpfc_vpi_rsrc_rdy_SHIFT                2
 #define lpfc_vpi_rsrc_rdy_MASK         0x00000001
 #define lpfc_vpi_rsrc_rdy_WORD         word0
 #define LPFC_VPI_RSRC_RDY              1
-#define lpfc_vfi_rsrc_rdy_SHIFT                4
+#define lpfc_vfi_rsrc_rdy_SHIFT                3
 #define lpfc_vfi_rsrc_rdy_MASK         0x00000001
 #define lpfc_vfi_rsrc_rdy_WORD         word0
 #define LPFC_VFI_RSRC_RDY              1
@@ -3299,7 +3295,13 @@ struct els_request64_wqe {
 struct xmit_els_rsp64_wqe {
        struct ulp_bde64 bde;
        uint32_t response_payload_len;
-       uint32_t rsvd4;
+       uint32_t word4;
+#define els_rsp64_sid_SHIFT         0
+#define els_rsp64_sid_MASK          0x00FFFFFF
+#define els_rsp64_sid_WORD          word4
+#define els_rsp64_sp_SHIFT          24
+#define els_rsp64_sp_MASK           0x00000001
+#define els_rsp64_sp_WORD           word4
        struct wqe_did wqe_dest;
        struct wqe_common wqe_com; /* words 6-11 */
        uint32_t word12;
index 9598fdc..411ed48 100644 (file)
@@ -64,8 +64,8 @@ static int lpfc_sli4_queue_verify(struct lpfc_hba *);
 static int lpfc_create_bootstrap_mbox(struct lpfc_hba *);
 static int lpfc_setup_endian_order(struct lpfc_hba *);
 static void lpfc_destroy_bootstrap_mbox(struct lpfc_hba *);
-static void lpfc_free_sgl_list(struct lpfc_hba *);
-static int lpfc_init_sgl_list(struct lpfc_hba *);
+static void lpfc_free_els_sgl_list(struct lpfc_hba *);
+static void lpfc_init_sgl_list(struct lpfc_hba *);
 static int lpfc_init_active_sgl_array(struct lpfc_hba *);
 static void lpfc_free_active_sgl(struct lpfc_hba *);
 static int lpfc_hba_down_post_s3(struct lpfc_hba *phba);
@@ -2767,47 +2767,14 @@ lpfc_offline(struct lpfc_hba *phba)
 }
 
 /**
- * lpfc_scsi_buf_update - Update the scsi_buffers that are already allocated.
- * @phba: pointer to lpfc hba data structure.
- *
- * This routine goes through all the scsi buffers in the system and updates the
- * Physical XRIs assigned to the SCSI buffer because these may change after any
- * firmware reset
- *
- * Return codes
- *   0 - successful (for now, it always returns 0)
- **/
-int
-lpfc_scsi_buf_update(struct lpfc_hba *phba)
-{
-       struct lpfc_scsi_buf *sb, *sb_next;
-
-       spin_lock_irq(&phba->hbalock);
-       spin_lock(&phba->scsi_buf_list_lock);
-       list_for_each_entry_safe(sb, sb_next, &phba->lpfc_scsi_buf_list, list) {
-               sb->cur_iocbq.sli4_xritag =
-                       phba->sli4_hba.xri_ids[sb->cur_iocbq.sli4_lxritag];
-               set_bit(sb->cur_iocbq.sli4_lxritag, phba->sli4_hba.xri_bmask);
-               phba->sli4_hba.max_cfg_param.xri_used++;
-               phba->sli4_hba.xri_count++;
-       }
-       spin_unlock(&phba->scsi_buf_list_lock);
-       spin_unlock_irq(&phba->hbalock);
-       return 0;
-}
-
-/**
  * lpfc_scsi_free - Free all the SCSI buffers and IOCBs from driver lists
  * @phba: pointer to lpfc hba data structure.
  *
  * This routine is to free all the SCSI buffers and IOCBs from the driver
  * list back to kernel. It is called from lpfc_pci_remove_one to free
  * the internal resources before the device is removed from the system.
- *
- * Return codes
- *   0 - successful (for now, it always returns 0)
  **/
-static int
+static void
 lpfc_scsi_free(struct lpfc_hba *phba)
 {
        struct lpfc_scsi_buf *sb, *sb_next;
@@ -2833,7 +2800,178 @@ lpfc_scsi_free(struct lpfc_hba *phba)
        }
 
        spin_unlock_irq(&phba->hbalock);
+}
+
+/**
+ * lpfc_sli4_xri_sgl_update - update xri-sgl sizing and mapping
+ * @phba: pointer to lpfc hba data structure.
+ *
+ * This routine first calculates the sizes of the current els and allocated
+ * scsi sgl lists, and then goes through all sgls to update the physical
+ * XRIs assigned due to port function reset. During port initialization, the
+ * current els and allocated scsi sgl lists are 0s.
+ *
+ * Return codes
+ *   0 - successful (for now, it always returns 0)
+ **/
+int
+lpfc_sli4_xri_sgl_update(struct lpfc_hba *phba)
+{
+       struct lpfc_sglq *sglq_entry = NULL, *sglq_entry_next = NULL;
+       struct lpfc_scsi_buf *psb = NULL, *psb_next = NULL;
+       uint16_t i, lxri, xri_cnt, els_xri_cnt, scsi_xri_cnt;
+       LIST_HEAD(els_sgl_list);
+       LIST_HEAD(scsi_sgl_list);
+       int rc;
+
+       /*
+        * update on pci function's els xri-sgl list
+        */
+       els_xri_cnt = lpfc_sli4_get_els_iocb_cnt(phba);
+       if (els_xri_cnt > phba->sli4_hba.els_xri_cnt) {
+               /* els xri-sgl expanded */
+               xri_cnt = els_xri_cnt - phba->sli4_hba.els_xri_cnt;
+               lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
+                               "3157 ELS xri-sgl count increased from "
+                               "%d to %d\n", phba->sli4_hba.els_xri_cnt,
+                               els_xri_cnt);
+               /* allocate the additional els sgls */
+               for (i = 0; i < xri_cnt; i++) {
+                       sglq_entry = kzalloc(sizeof(struct lpfc_sglq),
+                                            GFP_KERNEL);
+                       if (sglq_entry == NULL) {
+                               lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+                                               "2562 Failure to allocate an "
+                                               "ELS sgl entry:%d\n", i);
+                               rc = -ENOMEM;
+                               goto out_free_mem;
+                       }
+                       sglq_entry->buff_type = GEN_BUFF_TYPE;
+                       sglq_entry->virt = lpfc_mbuf_alloc(phba, 0,
+                                                          &sglq_entry->phys);
+                       if (sglq_entry->virt == NULL) {
+                               kfree(sglq_entry);
+                               lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+                                               "2563 Failure to allocate an "
+                                               "ELS mbuf:%d\n", i);
+                               rc = -ENOMEM;
+                               goto out_free_mem;
+                       }
+                       sglq_entry->sgl = sglq_entry->virt;
+                       memset(sglq_entry->sgl, 0, LPFC_BPL_SIZE);
+                       sglq_entry->state = SGL_FREED;
+                       list_add_tail(&sglq_entry->list, &els_sgl_list);
+               }
+               spin_lock(&phba->hbalock);
+               list_splice_init(&els_sgl_list, &phba->sli4_hba.lpfc_sgl_list);
+               spin_unlock(&phba->hbalock);
+       } else if (els_xri_cnt < phba->sli4_hba.els_xri_cnt) {
+               /* els xri-sgl shrunk */
+               xri_cnt = phba->sli4_hba.els_xri_cnt - els_xri_cnt;
+               lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
+                               "3158 ELS xri-sgl count decreased from "
+                               "%d to %d\n", phba->sli4_hba.els_xri_cnt,
+                               els_xri_cnt);
+               spin_lock_irq(&phba->hbalock);
+               list_splice_init(&phba->sli4_hba.lpfc_sgl_list, &els_sgl_list);
+               spin_unlock_irq(&phba->hbalock);
+               /* release extra els sgls from list */
+               for (i = 0; i < xri_cnt; i++) {
+                       list_remove_head(&els_sgl_list,
+                                        sglq_entry, struct lpfc_sglq, list);
+                       if (sglq_entry) {
+                               lpfc_mbuf_free(phba, sglq_entry->virt,
+                                              sglq_entry->phys);
+                               kfree(sglq_entry);
+                       }
+               }
+               spin_lock_irq(&phba->hbalock);
+               list_splice_init(&els_sgl_list, &phba->sli4_hba.lpfc_sgl_list);
+               spin_unlock_irq(&phba->hbalock);
+       } else
+               lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
+                               "3163 ELS xri-sgl count unchanged: %d\n",
+                               els_xri_cnt);
+       phba->sli4_hba.els_xri_cnt = els_xri_cnt;
+
+       /* update xris to els sgls on the list */
+       sglq_entry = NULL;
+       sglq_entry_next = NULL;
+       list_for_each_entry_safe(sglq_entry, sglq_entry_next,
+                                &phba->sli4_hba.lpfc_sgl_list, list) {
+               lxri = lpfc_sli4_next_xritag(phba);
+               if (lxri == NO_XRI) {
+                       lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+                                       "2400 Failed to allocate xri for "
+                                       "ELS sgl\n");
+                       rc = -ENOMEM;
+                       goto out_free_mem;
+               }
+               sglq_entry->sli4_lxritag = lxri;
+               sglq_entry->sli4_xritag = phba->sli4_hba.xri_ids[lxri];
+       }
+
+       /*
+        * update on pci function's allocated scsi xri-sgl list
+        */
+       phba->total_scsi_bufs = 0;
+
+       /* maximum number of xris available for scsi buffers */
+       phba->sli4_hba.scsi_xri_max = phba->sli4_hba.max_cfg_param.max_xri -
+                                     els_xri_cnt;
+
+       lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
+                       "2401 Current allocated SCSI xri-sgl count:%d, "
+                       "maximum  SCSI xri count:%d\n",
+                       phba->sli4_hba.scsi_xri_cnt,
+                       phba->sli4_hba.scsi_xri_max);
+
+       spin_lock_irq(&phba->scsi_buf_list_lock);
+       list_splice_init(&phba->lpfc_scsi_buf_list, &scsi_sgl_list);
+       spin_unlock_irq(&phba->scsi_buf_list_lock);
+
+       if (phba->sli4_hba.scsi_xri_cnt > phba->sli4_hba.scsi_xri_max) {
+               /* max scsi xri shrunk below the allocated scsi buffers */
+               scsi_xri_cnt = phba->sli4_hba.scsi_xri_cnt -
+                                       phba->sli4_hba.scsi_xri_max;
+               /* release the extra allocated scsi buffers */
+               for (i = 0; i < scsi_xri_cnt; i++) {
+                       list_remove_head(&scsi_sgl_list, psb,
+                                        struct lpfc_scsi_buf, list);
+                       pci_pool_free(phba->lpfc_scsi_dma_buf_pool, psb->data,
+                                     psb->dma_handle);
+                       kfree(psb);
+               }
+               spin_lock_irq(&phba->scsi_buf_list_lock);
+               phba->sli4_hba.scsi_xri_cnt -= scsi_xri_cnt;
+               spin_unlock_irq(&phba->scsi_buf_list_lock);
+       }
+
+       /* update xris associated to remaining allocated scsi buffers */
+       psb = NULL;
+       psb_next = NULL;
+       list_for_each_entry_safe(psb, psb_next, &scsi_sgl_list, list) {
+               lxri = lpfc_sli4_next_xritag(phba);
+               if (lxri == NO_XRI) {
+                       lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+                                       "2560 Failed to allocate xri for "
+                                       "scsi buffer\n");
+                       rc = -ENOMEM;
+                       goto out_free_mem;
+               }
+               psb->cur_iocbq.sli4_lxritag = lxri;
+               psb->cur_iocbq.sli4_xritag = phba->sli4_hba.xri_ids[lxri];
+       }
+       spin_lock(&phba->scsi_buf_list_lock);
+       list_splice_init(&scsi_sgl_list, &phba->lpfc_scsi_buf_list);
+       spin_unlock(&phba->scsi_buf_list_lock);
+
        return 0;
+
+out_free_mem:
+       lpfc_free_els_sgl_list(phba);
+       lpfc_scsi_free(phba);
+       return rc;
 }
 
 /**
@@ -4636,18 +4774,15 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
        if (rc)
                goto out_free_bsmbx;
 
-       /* Initialize and populate the iocb list per host */
-       rc = lpfc_init_sgl_list(phba);
-       if (rc) {
-               lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-                               "1400 Failed to initialize sgl list.\n");
-               goto out_destroy_cq_event_pool;
-       }
+       /* Initialize sgl lists per host */
+       lpfc_init_sgl_list(phba);
+
+       /* Allocate and initialize active sgl array */
        rc = lpfc_init_active_sgl_array(phba);
        if (rc) {
                lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
                                "1430 Failed to initialize sgl list.\n");
-               goto out_free_sgl_list;
+               goto out_destroy_cq_event_pool;
        }
        rc = lpfc_sli4_init_rpi_hdrs(phba);
        if (rc) {
@@ -4722,8 +4857,6 @@ out_remove_rpi_hdrs:
        lpfc_sli4_remove_rpi_hdrs(phba);
 out_free_active_sgl:
        lpfc_free_active_sgl(phba);
-out_free_sgl_list:
-       lpfc_free_sgl_list(phba);
 out_destroy_cq_event_pool:
        lpfc_sli4_cq_event_pool_destroy(phba);
 out_free_bsmbx:
@@ -4760,10 +4893,7 @@ lpfc_sli4_driver_resource_unset(struct lpfc_hba *phba)
 
        /* Free the ELS sgl list */
        lpfc_free_active_sgl(phba);
-       lpfc_free_sgl_list(phba);
-
-       /* Free the SCSI sgl management array */
-       kfree(phba->sli4_hba.lpfc_scsi_psb_array);
+       lpfc_free_els_sgl_list(phba);
 
        /* Free the completion queue EQ event pool */
        lpfc_sli4_cq_event_release_all(phba);
@@ -4990,29 +5120,42 @@ out_free_iocbq:
 }
 
 /**
- * lpfc_free_sgl_list - Free sgl list.
+ * lpfc_free_sgl_list - Free a given sgl list.
  * @phba: pointer to lpfc hba data structure.
+ * @sglq_list: pointer to the head of sgl list.
  *
- * This routine is invoked to free the driver's sgl list and memory.
+ * This routine is invoked to free a given sgl list and its memory.
  **/
-static void
-lpfc_free_sgl_list(struct lpfc_hba *phba)
+void
+lpfc_free_sgl_list(struct lpfc_hba *phba, struct list_head *sglq_list)
 {
        struct lpfc_sglq *sglq_entry = NULL, *sglq_next = NULL;
+
+       list_for_each_entry_safe(sglq_entry, sglq_next, sglq_list, list) {
+               list_del(&sglq_entry->list);
+               lpfc_mbuf_free(phba, sglq_entry->virt, sglq_entry->phys);
+               kfree(sglq_entry);
+       }
+}
+
+/**
+ * lpfc_free_els_sgl_list - Free els sgl list.
+ * @phba: pointer to lpfc hba data structure.
+ *
+ * This routine is invoked to free the driver's els sgl list and memory.
+ **/
+static void
+lpfc_free_els_sgl_list(struct lpfc_hba *phba)
+{
        LIST_HEAD(sglq_list);
 
+       /* Retrieve all els sgls from driver list */
        spin_lock_irq(&phba->hbalock);
        list_splice_init(&phba->sli4_hba.lpfc_sgl_list, &sglq_list);
        spin_unlock_irq(&phba->hbalock);
 
-       list_for_each_entry_safe(sglq_entry, sglq_next,
-                                &sglq_list, list) {
-               list_del(&sglq_entry->list);
-               lpfc_mbuf_free(phba, sglq_entry->virt, sglq_entry->phys);
-               kfree(sglq_entry);
-               phba->sli4_hba.total_sglq_bufs--;
-       }
-       kfree(phba->sli4_hba.lpfc_els_sgl_array);
+       /* Now free the sgl list */
+       lpfc_free_sgl_list(phba, &sglq_list);
 }
 
 /**
@@ -5057,99 +5200,19 @@ lpfc_free_active_sgl(struct lpfc_hba *phba)
  * This routine is invoked to allocate and initialize the driver's sgl
  * list and set up the sgl xritag tag array accordingly.
  *
- * Return codes
- *     0 - successful
- *     other values - error
  **/
-static int
+static void
 lpfc_init_sgl_list(struct lpfc_hba *phba)
 {
-       struct lpfc_sglq *sglq_entry = NULL;
-       int i;
-       int els_xri_cnt;
-
-       els_xri_cnt = lpfc_sli4_get_els_iocb_cnt(phba);
-       lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-                               "2400 ELS XRI count %d.\n",
-                               els_xri_cnt);
        /* Initialize and populate the sglq list per host/VF. */
        INIT_LIST_HEAD(&phba->sli4_hba.lpfc_sgl_list);
        INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_els_sgl_list);
 
-       /* Sanity check on XRI management */
-       if (phba->sli4_hba.max_cfg_param.max_xri <= els_xri_cnt) {
-               lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-                               "2562 No room left for SCSI XRI allocation: "
-                               "max_xri=%d, els_xri=%d\n",
-                               phba->sli4_hba.max_cfg_param.max_xri,
-                               els_xri_cnt);
-               return -ENOMEM;
-       }
-
-       /* Allocate memory for the ELS XRI management array */
-       phba->sli4_hba.lpfc_els_sgl_array =
-                       kzalloc((sizeof(struct lpfc_sglq *) * els_xri_cnt),
-                       GFP_KERNEL);
-
-       if (!phba->sli4_hba.lpfc_els_sgl_array) {
-               lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-                               "2401 Failed to allocate memory for ELS "
-                               "XRI management array of size %d.\n",
-                               els_xri_cnt);
-               return -ENOMEM;
-       }
+       /* els xri-sgl bookkeeping */
+       phba->sli4_hba.els_xri_cnt = 0;
 
-       /* Keep the SCSI XRI into the XRI management array */
-       phba->sli4_hba.scsi_xri_max =
-                       phba->sli4_hba.max_cfg_param.max_xri - els_xri_cnt;
+       /* scsi xri-buffer bookkeeping */
        phba->sli4_hba.scsi_xri_cnt = 0;
-       phba->sli4_hba.lpfc_scsi_psb_array =
-                       kzalloc((sizeof(struct lpfc_scsi_buf *) *
-                       phba->sli4_hba.scsi_xri_max), GFP_KERNEL);
-
-       if (!phba->sli4_hba.lpfc_scsi_psb_array) {
-               lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-                               "2563 Failed to allocate memory for SCSI "
-                               "XRI management array of size %d.\n",
-                               phba->sli4_hba.scsi_xri_max);
-               kfree(phba->sli4_hba.lpfc_els_sgl_array);
-               return -ENOMEM;
-       }
-
-       for (i = 0; i < els_xri_cnt; i++) {
-               sglq_entry = kzalloc(sizeof(struct lpfc_sglq), GFP_KERNEL);
-               if (sglq_entry == NULL) {
-                       printk(KERN_ERR "%s: only allocated %d sgls of "
-                               "expected %d count. Unloading driver.\n",
-                               __func__, i, els_xri_cnt);
-                       goto out_free_mem;
-               }
-
-               sglq_entry->buff_type = GEN_BUFF_TYPE;
-               sglq_entry->virt = lpfc_mbuf_alloc(phba, 0, &sglq_entry->phys);
-               if (sglq_entry->virt == NULL) {
-                       kfree(sglq_entry);
-                       printk(KERN_ERR "%s: failed to allocate mbuf.\n"
-                               "Unloading driver.\n", __func__);
-                       goto out_free_mem;
-               }
-               sglq_entry->sgl = sglq_entry->virt;
-               memset(sglq_entry->sgl, 0, LPFC_BPL_SIZE);
-
-               /* The list order is used by later block SGL registraton */
-               spin_lock_irq(&phba->hbalock);
-               sglq_entry->state = SGL_FREED;
-               list_add_tail(&sglq_entry->list, &phba->sli4_hba.lpfc_sgl_list);
-               phba->sli4_hba.lpfc_els_sgl_array[i] = sglq_entry;
-               phba->sli4_hba.total_sglq_bufs++;
-               spin_unlock_irq(&phba->hbalock);
-       }
-       return 0;
-
-out_free_mem:
-       kfree(phba->sli4_hba.lpfc_scsi_psb_array);
-       lpfc_free_sgl_list(phba);
-       return -ENOMEM;
 }
 
 /**
@@ -7320,9 +7383,11 @@ lpfc_pci_function_reset(struct lpfc_hba *phba)
                                        phba->sli4_hba.u.if_type2.ERR2regaddr);
                                lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
                                        "2890 Port error detected during port "
-                                       "reset(%d): port status reg 0x%x, "
+                                       "reset(%d): wait_tmo:%d ms, "
+                                       "port status reg 0x%x, "
                                        "error 1=0x%x, error 2=0x%x\n",
-                                       num_resets, reg_data.word0,
+                                       num_resets, rdy_chk*10,
+                                       reg_data.word0,
                                        phba->work_status[0],
                                        phba->work_status[1]);
                                rc = -ENODEV;
@@ -8694,8 +8759,11 @@ lpfc_pci_remove_one_s3(struct pci_dev *pdev)
        /* Release all the vports against this physical port */
        vports = lpfc_create_vport_work_array(phba);
        if (vports != NULL)
-               for (i = 1; i <= phba->max_vports && vports[i] != NULL; i++)
+               for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
+                       if (vports[i]->port_type == LPFC_PHYSICAL_PORT)
+                               continue;
                        fc_vport_terminate(vports[i]->fc_vport);
+               }
        lpfc_destroy_vport_work_array(phba, vports);
 
        /* Remove FC host and then SCSI host with the physical port */
@@ -9115,8 +9183,12 @@ lpfc_sli4_get_els_iocb_cnt(struct lpfc_hba *phba)
                        return 50;
                else if (max_xri <= 1024)
                        return 100;
-               else
+               else if (max_xri <= 1536)
                        return 150;
+               else if (max_xri <= 2048)
+                       return 200;
+               else
+                       return 250;
        } else
                return 0;
 }
@@ -9455,8 +9527,11 @@ lpfc_pci_remove_one_s4(struct pci_dev *pdev)
        /* Release all the vports against this physical port */
        vports = lpfc_create_vport_work_array(phba);
        if (vports != NULL)
-               for (i = 1; i <= phba->max_vports && vports[i] != NULL; i++)
+               for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
+                       if (vports[i]->port_type == LPFC_PHYSICAL_PORT)
+                               continue;
                        fc_vport_terminate(vports[i]->fc_vport);
+               }
        lpfc_destroy_vport_work_array(phba, vports);
 
        /* Remove FC host and then SCSI host with the physical port */
index 15ca2a9..9133a97 100644
@@ -367,8 +367,10 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
                return 1;
        }
 
+       /* Check for NPort to NPort pt2pt protocol */
        if ((vport->fc_flag & FC_PT2PT) &&
            !(vport->fc_flag & FC_PT2PT_PLOGI)) {
+
                /* rcv'ed PLOGI decides what our NPortId will be */
                vport->fc_myDID = icmd->un.rcvels.parmRo;
                mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
@@ -382,6 +384,13 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
                        mempool_free(mbox, phba->mbox_mem_pool);
                        goto out;
                }
+               /*
+                * For SLI4, the VFI/VPI are registered AFTER the
+                * Nport with the higher WWPN sends us a PLOGI with
+                * our assigned NPortId.
+                */
+               if (phba->sli_rev == LPFC_SLI_REV4)
+                       lpfc_issue_reg_vfi(vport);
 
                lpfc_can_disctmo(vport);
        }
index 88f3a83..66e0906 100644
@@ -399,6 +399,14 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
        num_rsrc_err = atomic_read(&phba->num_rsrc_err);
        num_cmd_success = atomic_read(&phba->num_cmd_success);
 
+       /*
+        * The error and success command counters are global per
+        * driver instance.  If another handler has already
+        * operated on this error event, just exit.
+        */
+       if (num_rsrc_err == 0)
+               return;
+
        vports = lpfc_create_vport_work_array(phba);
        if (vports != NULL)
                for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
@@ -688,7 +696,8 @@ lpfc_sli4_fcp_xri_aborted(struct lpfc_hba *phba,
                        rrq_empty = list_empty(&phba->active_rrq_list);
                        spin_unlock_irqrestore(&phba->hbalock, iflag);
                        if (ndlp) {
-                               lpfc_set_rrq_active(phba, ndlp, xri, rxid, 1);
+                               lpfc_set_rrq_active(phba, ndlp,
+                                       psb->cur_iocbq.sli4_lxritag, rxid, 1);
                                lpfc_sli4_abts_err_handler(phba, ndlp, axri);
                        }
                        lpfc_release_scsi_buf_s4(phba, psb);
@@ -718,72 +727,162 @@ lpfc_sli4_fcp_xri_aborted(struct lpfc_hba *phba,
 }
 
 /**
- * lpfc_sli4_repost_scsi_sgl_list - Repsot the Scsi buffers sgl pages as block
+ * lpfc_sli4_post_scsi_sgl_list - Post blocks of scsi buffer sgls from a list
  * @phba: pointer to lpfc hba data structure.
+ * @post_sblist: pointer to the scsi buffer list.
  *
- * This routine walks the list of scsi buffers that have been allocated and
- * repost them to the HBA by using SGL block post. This is needed after a
- * pci_function_reset/warm_start or start. The lpfc_hba_down_post_s4 routine
- * is responsible for moving all scsi buffers on the lpfc_abts_scsi_sgl_list
- * to the lpfc_scsi_buf_list. If the repost fails, reject all scsi buffers.
+ * This routine walks the list of scsi buffers passed in. It attempts to
+ * construct blocks of scsi buffer sgls which contain contiguous xris and
+ * uses the non-embedded SGL block post mailbox commands to post them to
+ * the port. Any single SCSI buffer sgl with a non-contiguous xri is posted
+ * with the embedded SGL post mailbox command. The @post_sblist passed in
+ * must be a local list, so no lock is needed while manipulating it.
  *
- * Returns: 0 = success, non-zero failure.
+ * Returns: 0 = failure, non-zero = number of successfully posted buffers.
  **/
 int
-lpfc_sli4_repost_scsi_sgl_list(struct lpfc_hba *phba)
+lpfc_sli4_post_scsi_sgl_list(struct lpfc_hba *phba,
+                            struct list_head *post_sblist, int sb_count)
 {
-       struct lpfc_scsi_buf *psb;
-       int index, status, bcnt = 0, rcnt = 0, rc = 0;
-       LIST_HEAD(sblist);
-
-       for (index = 0; index < phba->sli4_hba.scsi_xri_cnt; index++) {
-               psb = phba->sli4_hba.lpfc_scsi_psb_array[index];
-               if (psb) {
-                       /* Remove from SCSI buffer list */
-                       list_del(&psb->list);
-                       /* Add it to a local SCSI buffer list */
-                       list_add_tail(&psb->list, &sblist);
-                       if (++rcnt == LPFC_NEMBED_MBOX_SGL_CNT) {
-                               bcnt = rcnt;
-                               rcnt = 0;
+       struct lpfc_scsi_buf *psb, *psb_next;
+       int status;
+       int post_cnt = 0, block_cnt = 0, num_posting = 0, num_posted = 0;
+       dma_addr_t pdma_phys_bpl1;
+       int last_xritag = NO_XRI;
+       LIST_HEAD(prep_sblist);
+       LIST_HEAD(blck_sblist);
+       LIST_HEAD(scsi_sblist);
+
+       /* sanity check */
+       if (sb_count <= 0)
+               return -EINVAL;
+
+       list_for_each_entry_safe(psb, psb_next, post_sblist, list) {
+               list_del_init(&psb->list);
+               block_cnt++;
+               if ((last_xritag != NO_XRI) &&
+                   (psb->cur_iocbq.sli4_xritag != last_xritag + 1)) {
+                       /* a hole in xri block, form a sgl posting block */
+                       list_splice_init(&prep_sblist, &blck_sblist);
+                       post_cnt = block_cnt - 1;
+                       /* prepare list for next posting block */
+                       list_add_tail(&psb->list, &prep_sblist);
+                       block_cnt = 1;
+               } else {
+                       /* prepare list for next posting block */
+                       list_add_tail(&psb->list, &prep_sblist);
+                       /* enough sgls for non-embed sgl mbox command */
+                       if (block_cnt == LPFC_NEMBED_MBOX_SGL_CNT) {
+                               list_splice_init(&prep_sblist, &blck_sblist);
+                               post_cnt = block_cnt;
+                               block_cnt = 0;
                        }
-               } else
-                       /* A hole present in the XRI array, need to skip */
-                       bcnt = rcnt;
+               }
+               num_posting++;
+               last_xritag = psb->cur_iocbq.sli4_xritag;
 
-               if (index == phba->sli4_hba.scsi_xri_cnt - 1)
-                       /* End of XRI array for SCSI buffer, complete */
-                       bcnt = rcnt;
+               /* end of repost sgl list condition for SCSI buffers */
+               if (num_posting == sb_count) {
+                       if (post_cnt == 0) {
+                               /* last sgl posting block */
+                               list_splice_init(&prep_sblist, &blck_sblist);
+                               post_cnt = block_cnt;
+                       } else if (block_cnt == 1) {
+                               /* last single sgl with non-contiguous xri */
+                               if (phba->cfg_sg_dma_buf_size > SGL_PAGE_SIZE)
+                                       pdma_phys_bpl1 = psb->dma_phys_bpl +
+                                                               SGL_PAGE_SIZE;
+                               else
+                                       pdma_phys_bpl1 = 0;
+                               status = lpfc_sli4_post_sgl(phba,
+                                               psb->dma_phys_bpl,
+                                               pdma_phys_bpl1,
+                                               psb->cur_iocbq.sli4_xritag);
+                               if (status) {
+                                       /* failure, put on abort scsi list */
+                                       psb->exch_busy = 1;
+                               } else {
+                                       /* success, put on SCSI buffer list */
+                                       psb->exch_busy = 0;
+                                       psb->status = IOSTAT_SUCCESS;
+                                       num_posted++;
+                               }
+                               /* posted or not, put on SCSI buffer sgl list */
+                               list_add_tail(&psb->list, &scsi_sblist);
+                       }
+               }
 
-               /* Continue until collect up to a nembed page worth of sgls */
-               if (bcnt == 0)
+               /* continue until a nembed page worth of sgls */
+               if (post_cnt == 0)
                        continue;
-               /* Now, post the SCSI buffer list sgls as a block */
-               if (!phba->sli4_hba.extents_in_use)
-                       status = lpfc_sli4_post_scsi_sgl_block(phba,
-                                                       &sblist,
-                                                       bcnt);
-               else
-                       status = lpfc_sli4_post_scsi_sgl_blk_ext(phba,
-                                                       &sblist,
-                                                       bcnt);
-               /* Reset SCSI buffer count for next round of posting */
-               bcnt = 0;
-               while (!list_empty(&sblist)) {
-                       list_remove_head(&sblist, psb, struct lpfc_scsi_buf,
-                                        list);
+
+               /* post block of SCSI buffer list sgls */
+               status = lpfc_sli4_post_scsi_sgl_block(phba, &blck_sblist,
+                                                      post_cnt);
+
+               /* don't reset xritag due to hole in xri block */
+               if (block_cnt == 0)
+                       last_xritag = NO_XRI;
+
+               /* reset SCSI buffer post count for next round of posting */
+               post_cnt = 0;
+
+               /* move this block's SCSI buffers onto the SCSI buffer sgl list */
+               while (!list_empty(&blck_sblist)) {
+                       list_remove_head(&blck_sblist, psb,
+                                        struct lpfc_scsi_buf, list);
                        if (status) {
-                               /* Put this back on the abort scsi list */
+                               /* failure, put on abort scsi list */
                                psb->exch_busy = 1;
-                               rc++;
                        } else {
+                               /* success, put on SCSI buffer list */
                                psb->exch_busy = 0;
                                psb->status = IOSTAT_SUCCESS;
+                               num_posted++;
                        }
-                       /* Put it back into the SCSI buffer list */
-                       lpfc_release_scsi_buf_s4(phba, psb);
+                       list_add_tail(&psb->list, &scsi_sblist);
                }
        }
+       /* Push SCSI buffers with posted sgls onto the available list */
+       while (!list_empty(&scsi_sblist)) {
+               list_remove_head(&scsi_sblist, psb,
+                                struct lpfc_scsi_buf, list);
+               lpfc_release_scsi_buf_s4(phba, psb);
+       }
+       return num_posted;
+}
+
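
The kernel-doc above describes the grouping rule: buffers accumulate into a block while their xritags stay consecutive, a block is cut early when a hole appears in the tag sequence, and a full block (LPFC_NEMBED_MBOX_SGL_CNT entries) is cut as well. Below is a self-contained sketch of just that grouping rule, not the driver's exact control flow; the block limit of 4 and the sample tag values are made up:

#include <stdio.h>

#define BLOCK_LIMIT 4   /* illustrative stand-in for LPFC_NEMBED_MBOX_SGL_CNT */

/* Print how a sequence of xritags would be carved into posting blocks:
 * a block ends when the next tag is not consecutive or the block is full. */
static void carve_blocks(const int *tags, int count)
{
        int start = 0;

        for (int i = 1; i <= count; i++) {
                int hole = (i < count) && (tags[i] != tags[i - 1] + 1);
                int full = (i - start) == BLOCK_LIMIT;

                if (i == count || hole || full) {
                        printf("post block: tags %d..%d (%d entries)\n",
                               tags[start], tags[i - 1], i - start);
                        start = i;
                }
        }
}

int main(void)
{
        /* A run of tags with one hole (19 is missing). */
        int tags[] = { 16, 17, 18, 20, 21, 22, 23, 24, 25 };

        carve_blocks(tags, sizeof(tags) / sizeof(tags[0]));
        return 0;
}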
+/**
+ * lpfc_sli4_repost_scsi_sgl_list - Repost all the allocated scsi buffer sgls
+ * @phba: pointer to lpfc hba data structure.
+ *
+ * This routine walks the list of scsi buffers that have been allocated and
+ * reposts them to the port by using SGL block post. This is needed after a
+ * pci_function_reset/warm_start or start. The lpfc_hba_down_post_s4 routine
+ * is responsible for moving all scsi buffers on the lpfc_abts_scsi_sgl_list
+ * to the lpfc_scsi_buf_list. If the repost fails, reject all scsi buffers.
+ *
+ * Returns: 0 = success, non-zero failure.
+ **/
+int
+lpfc_sli4_repost_scsi_sgl_list(struct lpfc_hba *phba)
+{
+       LIST_HEAD(post_sblist);
+       int num_posted, rc = 0;
+
+       /* gather all SCSI buffers that need reposting onto a local list */
+       spin_lock(&phba->scsi_buf_list_lock);
+       list_splice_init(&phba->lpfc_scsi_buf_list, &post_sblist);
+       spin_unlock(&phba->scsi_buf_list_lock);
+
+       /* post the list of scsi buffer sgls to port if available */
+       if (!list_empty(&post_sblist)) {
+               num_posted = lpfc_sli4_post_scsi_sgl_list(phba, &post_sblist,
+                                               phba->sli4_hba.scsi_xri_cnt);
+               /* failed to post any scsi buffer, return error */
+               if (num_posted == 0)
+                       rc = -EIO;
+       }
        return rc;
 }
 
@@ -792,12 +891,13 @@ lpfc_sli4_repost_scsi_sgl_list(struct lpfc_hba *phba)
  * @vport: The virtual port for which this call being executed.
  * @num_to_allocate: The requested number of buffers to allocate.
  *
- * This routine allocates a scsi buffer for device with SLI-4 interface spec,
+ * This routine allocates scsi buffers for device with SLI-4 interface spec,
  * the scsi buffer contains all the necessary information needed to initiate
- * a SCSI I/O.
+ * a SCSI I/O. After allocating up to @num_to_allocate SCSI buffers and putting
+ * them on a list, it posts them to the port by using SGL block post.
  *
  * Return codes:
- *   int - number of scsi buffers that were allocated.
+ *   int - number of scsi buffers that were allocated and posted.
  *   0 = failure, less than num_to_alloc is a partial failure.
  **/
 static int
@@ -810,22 +910,21 @@ lpfc_new_scsi_buf_s4(struct lpfc_vport *vport, int num_to_alloc)
        dma_addr_t pdma_phys_fcp_cmd;
        dma_addr_t pdma_phys_fcp_rsp;
        dma_addr_t pdma_phys_bpl, pdma_phys_bpl1;
-       uint16_t iotag, last_xritag = NO_XRI, lxri = 0;
-       int status = 0, index;
-       int bcnt;
-       int non_sequential_xri = 0;
-       LIST_HEAD(sblist);
+       uint16_t iotag, lxri = 0;
+       int bcnt, num_posted;
+       LIST_HEAD(prep_sblist);
+       LIST_HEAD(post_sblist);
+       LIST_HEAD(scsi_sblist);
 
        for (bcnt = 0; bcnt < num_to_alloc; bcnt++) {
                psb = kzalloc(sizeof(struct lpfc_scsi_buf), GFP_KERNEL);
                if (!psb)
                        break;
-
                /*
-                * Get memory from the pci pool to map the virt space to pci bus
-                * space for an I/O.  The DMA buffer includes space for the
-                * struct fcp_cmnd, struct fcp_rsp and the number of bde's
-                * necessary to support the sg_tablesize.
+                * Get memory from the pci pool to map the virt space to
+                * pci bus space for an I/O. The DMA buffer includes space
+                * for the struct fcp_cmnd, struct fcp_rsp and the number
+                * of bde's necessary to support the sg_tablesize.
                 */
                psb->data = pci_pool_alloc(phba->lpfc_scsi_dma_buf_pool,
                                                GFP_KERNEL, &psb->dma_handle);
@@ -833,8 +932,6 @@ lpfc_new_scsi_buf_s4(struct lpfc_vport *vport, int num_to_alloc)
                        kfree(psb);
                        break;
                }
-
-               /* Initialize virtual ptrs to dma_buf region. */
                memset(psb->data, 0, phba->cfg_sg_dma_buf_size);
 
                /* Allocate iotag for psb->cur_iocbq. */
@@ -855,16 +952,7 @@ lpfc_new_scsi_buf_s4(struct lpfc_vport *vport, int num_to_alloc)
                }
                psb->cur_iocbq.sli4_lxritag = lxri;
                psb->cur_iocbq.sli4_xritag = phba->sli4_hba.xri_ids[lxri];
-               if (last_xritag != NO_XRI
-                       && psb->cur_iocbq.sli4_xritag != (last_xritag+1)) {
-                       non_sequential_xri = 1;
-               } else
-                       list_add_tail(&psb->list, &sblist);
-               last_xritag = psb->cur_iocbq.sli4_xritag;
-
-               index = phba->sli4_hba.scsi_xri_cnt++;
                psb->cur_iocbq.iocb_flag |= LPFC_IO_FCP;
-
                psb->fcp_bpl = psb->data;
                psb->fcp_cmnd = (psb->data + phba->cfg_sg_dma_buf_size)
                        - (sizeof(struct fcp_cmnd) + sizeof(struct fcp_rsp));
@@ -880,9 +968,9 @@ lpfc_new_scsi_buf_s4(struct lpfc_vport *vport, int num_to_alloc)
                pdma_phys_fcp_rsp = pdma_phys_fcp_cmd + sizeof(struct fcp_cmnd);
 
                /*
-                * The first two bdes are the FCP_CMD and FCP_RSP.  The balance
-                * are sg list bdes.  Initialize the first two and leave the
-                * rest for queuecommand.
+                * The first two bdes are the FCP_CMD and FCP_RSP.
+                * The balance are sg list bdes. Initialize the
+                * first two and leave the rest for queuecommand.
                 */
                sgl->addr_hi = cpu_to_le32(putPaddrHigh(pdma_phys_fcp_cmd));
                sgl->addr_lo = cpu_to_le32(putPaddrLow(pdma_phys_fcp_cmd));
@@ -917,62 +1005,31 @@ lpfc_new_scsi_buf_s4(struct lpfc_vport *vport, int num_to_alloc)
                iocb->ulpBdeCount = 1;
                iocb->ulpLe = 1;
                iocb->ulpClass = CLASS3;
-               psb->cur_iocbq.context1  = psb;
+               psb->cur_iocbq.context1 = psb;
                if (phba->cfg_sg_dma_buf_size > SGL_PAGE_SIZE)
                        pdma_phys_bpl1 = pdma_phys_bpl + SGL_PAGE_SIZE;
                else
                        pdma_phys_bpl1 = 0;
                psb->dma_phys_bpl = pdma_phys_bpl;
-               phba->sli4_hba.lpfc_scsi_psb_array[index] = psb;
-               if (non_sequential_xri) {
-                       status = lpfc_sli4_post_sgl(phba, pdma_phys_bpl,
-                                               pdma_phys_bpl1,
-                                               psb->cur_iocbq.sli4_xritag);
-                       if (status) {
-                               /* Put this back on the abort scsi list */
-                               psb->exch_busy = 1;
-                       } else {
-                               psb->exch_busy = 0;
-                               psb->status = IOSTAT_SUCCESS;
-                       }
-                       /* Put it back into the SCSI buffer list */
-                       lpfc_release_scsi_buf_s4(phba, psb);
-                       break;
-               }
-       }
-       if (bcnt) {
-               if (!phba->sli4_hba.extents_in_use)
-                       status = lpfc_sli4_post_scsi_sgl_block(phba,
-                                                               &sblist,
-                                                               bcnt);
-               else
-                       status = lpfc_sli4_post_scsi_sgl_blk_ext(phba,
-                                                               &sblist,
-                                                               bcnt);
-
-               if (status) {
-                       lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
-                                       "3021 SCSI SGL post error %d\n",
-                                       status);
-                       bcnt = 0;
-               }
-               /* Reset SCSI buffer count for next round of posting */
-               while (!list_empty(&sblist)) {
-                       list_remove_head(&sblist, psb, struct lpfc_scsi_buf,
-                                list);
-                       if (status) {
-                               /* Put this back on the abort scsi list */
-                               psb->exch_busy = 1;
-                       } else {
-                               psb->exch_busy = 0;
-                               psb->status = IOSTAT_SUCCESS;
-                       }
-                       /* Put it back into the SCSI buffer list */
-                       lpfc_release_scsi_buf_s4(phba, psb);
-               }
+
+               /* add the scsi buffer to a post list */
+               list_add_tail(&psb->list, &post_sblist);
+               spin_lock_irq(&phba->scsi_buf_list_lock);
+               phba->sli4_hba.scsi_xri_cnt++;
+               spin_unlock_irq(&phba->scsi_buf_list_lock);
        }
+       lpfc_printf_log(phba, KERN_INFO, LOG_BG,
+                       "3021 Allocated %d out of %d requested new SCSI "
+                       "buffers\n", bcnt, num_to_alloc);
+
+       /* post the list of scsi buffer sgls to port if available */
+       if (!list_empty(&post_sblist))
+               num_posted = lpfc_sli4_post_scsi_sgl_list(phba,
+                                                         &post_sblist, bcnt);
+       else
+               num_posted = 0;
 
-       return bcnt + non_sequential_xri;
+       return num_posted;
 }
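
As the comment earlier in this routine notes, each SCSI buffer is one DMA allocation that holds the sgl/bpl area at the front and carves struct fcp_cmnd and struct fcp_rsp out of the very end. A tiny sketch of that offset arithmetic; the sizes here are placeholders, not the real structure sizes or cfg_sg_dma_buf_size:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
        /* Placeholder sizes; the driver derives these from its fcp_cmnd and
         * fcp_rsp structures and the configured sg_tablesize. */
        size_t buf_size     = 4096;   /* analogue of cfg_sg_dma_buf_size     */
        size_t fcp_cmnd_sz  = 32;     /* analogue of sizeof(struct fcp_cmnd) */
        size_t fcp_rsp_sz   = 96;     /* analogue of sizeof(struct fcp_rsp)  */

        size_t bpl_off      = 0;                                     /* sgl/bpl at the start */
        size_t fcp_cmnd_off = buf_size - (fcp_cmnd_sz + fcp_rsp_sz); /* carved from the end  */
        size_t fcp_rsp_off  = fcp_cmnd_off + fcp_cmnd_sz;            /* rsp follows cmnd     */

        printf("bpl/sgl area @ %zu (%zu bytes)\n", bpl_off, fcp_cmnd_off - bpl_off);
        printf("fcp_cmnd     @ %zu\n", fcp_cmnd_off);
        printf("fcp_rsp      @ %zu\n", fcp_rsp_off);
        return 0;
}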
 
 /**
@@ -1043,7 +1100,7 @@ lpfc_get_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
        list_for_each_entry(lpfc_cmd, &phba->lpfc_scsi_buf_list,
                                                        list) {
                if (lpfc_test_rrq_active(phba, ndlp,
-                                        lpfc_cmd->cur_iocbq.sli4_xritag))
+                                        lpfc_cmd->cur_iocbq.sli4_lxritag))
                        continue;
                list_del(&lpfc_cmd->list);
                found = 1;
@@ -1897,7 +1954,9 @@ lpfc_bg_setup_bpl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
        dma_addr_t physaddr;
        int i = 0, num_bde = 0, status;
        int datadir = sc->sc_data_direction;
+#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
        uint32_t rc;
+#endif
        uint32_t checking = 1;
        uint32_t reftag;
        unsigned blksize;
@@ -2034,7 +2093,9 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
        int datadir = sc->sc_data_direction;
        unsigned char pgdone = 0, alldone = 0;
        unsigned blksize;
+#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
        uint32_t rc;
+#endif
        uint32_t checking = 1;
        uint32_t reftag;
        uint8_t txop, rxop;
@@ -2253,7 +2314,9 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
        uint32_t reftag;
        unsigned blksize;
        uint8_t txop, rxop;
+#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
        uint32_t rc;
+#endif
        uint32_t checking = 1;
        uint32_t dma_len;
        uint32_t dma_offset = 0;
@@ -2383,7 +2446,9 @@ lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
        uint32_t reftag;
        uint8_t txop, rxop;
        uint32_t dma_len;
+#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
        uint32_t rc;
+#endif
        uint32_t checking = 1;
        uint32_t dma_offset = 0;
        int num_sge = 0;
@@ -3604,11 +3669,16 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
                        logit = LOG_FCP | LOG_FCP_UNDER;
                lpfc_printf_vlog(vport, KERN_WARNING, logit,
                         "9030 FCP cmd x%x failed <%d/%d> "
-                        "status: x%x result: x%x Data: x%x x%x\n",
+                        "status: x%x result: x%x "
+