1 - major number
2 - minor number
3 - device name
- 4 - reads completed succesfully
+ 4 - reads completed successfully
5 - reads merged
6 - sectors read
7 - time spent reading (ms)
Description:
The /sys/block/<disk>/stat files display the I/O
statistics of disk <disk>. They contain 11 fields:
- 1 - reads completed succesfully
+ 1 - reads completed successfully
2 - reads merged
3 - sectors read
4 - time spent reading (ms)
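For illustration only, a minimal user-space sketch that reads the first four fields
described above; the disk name "sda" is an assumption and error handling is kept minimal:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks;
		FILE *f = fopen("/sys/block/sda/stat", "r");

		if (!f)
			return 1;
		/* fields 1-4: reads completed, reads merged, sectors read, read time (ms) */
		if (fscanf(f, "%llu %llu %llu %llu",
			   &rd_ios, &rd_merges, &rd_sectors, &rd_ticks) == 4)
			printf("reads=%llu merged=%llu sectors=%llu ms=%llu\n",
			       rd_ios, rd_merges, rd_sectors, rd_ticks);
		fclose(f);
		return 0;
	}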
<sect1 id="Multiple_chip_control">
<title>Multiple chip control</title>
<para>
- The nand driver can control chip arrays. Therefor the
+ The nand driver can control chip arrays. Therefore the
board driver must provide its own select_chip function. This
function must (de)select the requested chip.
The function pointer in the nand_chip structure must
* you do, leave them untouched.
* Including fewer markers will make the
* resulting code smaller, but there will
- * be fewer aplications which can read it.
+ * be fewer applications which can read it.
* The presence of the APP and COM marker
* is influenced by APP_len and COM_len
* ONLY, not by this property! */
pages of the given size and map them onto the virtually contiguous
memory. The virtual pointer is addressed in runtime->dma_area.
The physical address (runtime->dma_addr) is set to zero,
- because the buffer is physically non-contigous.
+ because the buffer is physically non-contiguous.
The physical address table is set up in sgbuf->table.
You can get the physical address at a certain offset via
<function>snd_pcm_sgbuf_get_addr()</function>.
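As a purely illustrative sketch (not from the original text), a helper that looks up the
DMA address of a byte offset in such a buffer; the substream-based form of
snd_pcm_sgbuf_get_addr() is assumed here:

	#include <sound/pcm.h>

	/* hypothetical helper: DMA address of the byte at 'ofs' in the SG buffer */
	static dma_addr_t my_sgbuf_addr(struct snd_pcm_substream *substream,
					unsigned int ofs)
	{
		/* runtime->dma_area gives the virtually contiguous view;
		 * the per-page physical addresses come from sgbuf->table */
		return snd_pcm_sgbuf_get_addr(substream, ofs);
	}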
- moved transfer control (pid filter, fifo control) from usb driver to frontend; it seems
better settled there (added xfer_ops-struct)
- created common files for frontends (mc/p/mb)
- 2004-09-28 - added support for a new device (Unkown, vendor ID is Hyper-Paltek)
+ 2004-09-28 - added support for a new device (Unknown, vendor ID is Hyper-Paltek)
2004-09-20 - added support for a new device (Compro DVB-U2000), thanks
to Amaury Demol for reporting
- changed usb TS transfer method (several urbs, stopping transfer
addr = mmap(NULL, getpagesize() * num,
PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, fd, 0);
if (addr == MAP_FAILED)
- err(1, "Mmaping %u pages of /dev/zero", num);
+ err(1, "Mmapping %u pages of /dev/zero", num);
/*
* One neat mmap feature is that you can close the fd, and it
Disks are exposed with WCE=1. User is advised to enable Write Back
mode only when the controller has battery backup. At this time
Synchronize cache is not supported by the FW. Driver will short-cycle
- the cmd and return sucess without sending down to FW.
+ the cmd and return success without sending down to FW.
1 Release Date : Sun Jan. 14 11:21:32 PDT 2007 -
Sumant Patro <Sumant.Patro@lsil.com>/Bo Yang
The bulk of the driver will be managing the I/O queue fed by transfer().
That queue could be purely conceptual. For example, a driver used only
-for low-frequency sensor acess might be fine using synchronous PIO.
+for low-frequency sensor access might be fine using synchronous PIO.
But the queue will probably be very real, using message->queue, PIO,
often DMA (especially if the root filesystem is in SPI flash), and
mmap_min_addr
This file indicates the amount of address space which a user process will
-be restricted from mmaping. Since kernel null dereference bugs could
+be restricted from mmapping. Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them. By
default this value is set to 0 and no protections will be enforced by the
xxxx vend:prod
----
-spca501 0000:0000 MystFromOri Unknow Camera
+spca501 0000:0000 MystFromOri Unknown Camera
m5602 0402:5602 ALi Video Camera Controller
spca501 040a:0002 Kodak DVC-325
spca500 040a:0300 Kodak EZ200
present = (flags >> i) & 1;
if (!page_flag_names[i]) {
if (present)
- fatal("unkown flag bit %d\n", i);
+ fatal("unknown flag bit %d\n", i);
continue;
}
buf[j++] = present ? page_flag_names[i][0] : '_';
}
if (bootmap_start == -1)
- panic("couldn't find a contigous place for the bootmap");
+ panic("couldn't find a contiguous place for the bootmap");
/* Allocate the bootmap and mark the whole MM as reserved. */
bootmap_size = init_bootmem_node(NODE_DATA(nid), bootmap_start,
{
struct scoop_dev *sdev = container_of(chip, struct scoop_dev, gpio);
- /* XXX: I'm usure, but it seems so */
+ /* XXX: I'm unsure, but it seems so */
return ioread16(sdev->base + SCOOP_GPRR) & (1 << (offset + 1));
}
* @brief Get next available transaction width
*
*
-* @return On sucess : Next avail able transaction width
+* @return On success : Next available transaction width
* On failure : dmacHw_TRANSACTION_WIDTH_8
*
* @note
/**
* Creates a descriptor ring from a memory mapping.
*
-* @return 0 on sucess, error code otherwise.
+* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
/*
* This __REG() version gives the same results as the one above, except
* that we are fooling gcc somehow so it generates far better and smaller
- * assembly code for access to contigous registers. It's a shame that gcc
+ * assembly code for access to contiguous registers. It's a shame that gcc
* doesn't guess this by itself.
*/
#include <asm/types.h>
writel(win_enable, PCI_BAR_ENABLE);
/*
- * Disable automatic update of address remaping when writing to BARs.
+ * Disable automatic update of address remapping when writing to BARs.
*/
orion5x_setbits(PCI_ADDR_DECODE_CTRL, 1);
}
/* BATTERY */
#define PALMLD_BAT_MAX_VOLTAGE 4000 /* 4.00V maximum voltage */
#define PALMLD_BAT_MIN_VOLTAGE 3550 /* 3.55V critical voltage */
-#define PALMLD_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMLD_BAT_MAX_CURRENT 0 /* unknown */
#define PALMLD_BAT_MIN_CURRENT 0 /* unknown */
#define PALMLD_BAT_MAX_CHARGE 1 /* unknown */
#define PALMLD_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMT5_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMT5_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMT5_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMT5_BAT_MAX_CURRENT 0 /* unknown */
#define PALMT5_BAT_MIN_CURRENT 0 /* unknown */
#define PALMT5_BAT_MAX_CHARGE 1 /* unknown */
#define PALMT5_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMTC_BAT_MAX_VOLTAGE 4000 /* 4.00V maximum voltage */
#define PALMTC_BAT_MIN_VOLTAGE 3550 /* 3.55V critical voltage */
-#define PALMTC_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMTC_BAT_MAX_CURRENT 0 /* unknown */
#define PALMTC_BAT_MIN_CURRENT 0 /* unknown */
#define PALMTC_BAT_MAX_CHARGE 1 /* unknown */
#define PALMTC_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMTE2_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMTE2_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMTE2_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMTE2_BAT_MAX_CURRENT 0 /* unknown */
#define PALMTE2_BAT_MIN_CURRENT 0 /* unknown */
#define PALMTE2_BAT_MAX_CHARGE 1 /* unknown */
#define PALMTE2_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMTX_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMTX_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMTX_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMTX_BAT_MAX_CURRENT 0 /* unknown */
#define PALMTX_BAT_MIN_CURRENT 0 /* unknown */
#define PALMTX_BAT_MAX_CHARGE 1 /* unknown */
#define PALMTX_BAT_MIN_CHARGE 1 /* unknown */
/* Battery */
#define PALMZ72_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMZ72_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMZ72_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMZ72_BAT_MAX_CURRENT 0 /* unknown */
#define PALMZ72_BAT_MIN_CURRENT 0 /* unknown */
#define PALMZ72_BAT_MAX_CHARGE 1 /* unknown */
#define PALMZ72_BAT_MIN_CHARGE 1 /* unknown */
[0] = "hsmmc",
[1] = "hsmmc",
[2] = "mmc_bus",
- /* [3] = "48m", - note not succesfully used yet */
+ /* [3] = "48m", - note not successfully used yet */
};
void s3c6400_setup_sdhci_cfg_card(struct platform_device *dev,
[0] = "hsmmc",
[1] = "hsmmc",
[2] = "mmc_bus",
- /* [3] = "48m", - note not succesfully used yet */
+ /* [3] = "48m", - note not successfully used yet */
};
/**
- * sa1100_request_dma - allocate one of the SA11x0's DMA chanels
+ * sa1100_request_dma - allocate one of the SA11x0's DMA channels
* @device: The SA11x0 peripheral targeted by this request
* @device_id: An ascii name for the claiming device
* @callback: Function to be called when the DMA completes
* setups a single pin:
* - reserves the pin so that it is not claimed by another driver
* - setups the iomux according to the configuration
- * - if the pin is configured as a GPIO, we claim it throug kernel gpiolib
+ * - if the pin is configured as a GPIO, we claim it through kernel gpiolib
*/
int mxc_iomux_alloc_pin(const unsigned int pin, const char *label);
/*
* setups a single pin:
* - reserves the pin so that it is not claimed by another driver
* - setups the iomux according to the configuration
- * - if the pin is configured as a GPIO, we claim it throug kernel gpiolib
+ * - if the pin is configured as a GPIO, we claim it through kernel gpiolib
*/
int mxc_iomux_alloc_pin(const unsigned int pin_mode, const char *label);
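A hypothetical board-file sketch of the call described above; the pin value MY_PIN_CFG
and the label are invented for illustration:

	static int board_setup_pins(void)
	{
		int ret;

		/* reserve the pin and program its iomux mode in one call */
		ret = mxc_iomux_alloc_pin(MY_PIN_CFG, "board-uart-rts");
		if (ret)
			return ret;	/* pin already claimed or iomux setup failed */
		return 0;
	}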
/*
* register to follow the ratio of duty_ns vs. period_ns
* accordingly.
*
- * This is good enought for programming the brightness of
+ * This is good enough for programming the brightness of
* the LCD backlight.
*
* The real implementation would divide PERCLK[0] first by
* OMAP_DMA_DYNAMIC_CHAIN
* @params - Channel parameters
*
- * @return - Succes : 0
+ * @return - Success : 0
* Failure: -EINVAL/-ENOMEM
*/
int omap_request_dma_chain(int dev_id, const char *dev_name,
#define TIPB_SWITCH_BASE (0xfffbc800)
#define OMAP16XX_MMCSD2_SSW_MPU_CONF (TIPB_SWITCH_BASE + 0x160)
-/* UART3 Registers Maping through MPU bus */
+/* UART3 Registers Mapping through MPU bus */
#define UART3_RHR (OMAP_UART3_BASE + 0)
#define UART3_THR (OMAP_UART3_BASE + 0)
#define UART3_DLL (OMAP_UART3_BASE + 0)
/* the calculation for the VA of this must ensure that
* it is the same distance apart from the UART in the
* physical address space, as the initial mapping for the IO
- * is done as a 1:1 maping. This puts it (currently) at
+ * is done as a 1:1 mapping. This puts it (currently) at
* 0xFA800000, which is not in the way of any current mapping
* by the base system.
*/
bool "Atmel AC97 Sound support"
help
This enables Sound support for the Hammerhead board. You may
- also go trough the ALSA settings to get it working.
+ also go through the ALSA settings to get it working.
Choose 'Y' here if you have ordered a Corona daughter board and
want to make your board funky.
/*
* Similar to get_user, do some address checking, then dereference
- * Return true on sucess, false on bad address
+ * Return true on success, false on bad address
*/
static bool get_instruction(unsigned short *val, unsigned short *address)
{
#define HMDMA0_CONTROL 0xFFC03300 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xFFC03304 /* HMDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xFFC03308 /* HMDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xFFC03310 /* HMDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xFFC03314 /* HMDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xFFC03318 /* HMDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xFFC03340 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xFFC03344 /* HMDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xFFC03348 /* HMDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xFFC03350 /* HMDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xFFC03354 /* HMDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xFFC03358 /* HMDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xFFC03300 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xFFC03304 /* HMDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xFFC03308 /* HMDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xFFC03310 /* HMDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xFFC03314 /* HMDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xFFC03318 /* HMDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xFFC03340 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xFFC03344 /* HMDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xFFC03348 /* HMDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xFFC03350 /* HMDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xFFC03354 /* HMDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xFFC03358 /* HMDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xFFC03300 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xFFC03304 /* HMDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xFFC03308 /* HMDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xFFC03310 /* HMDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xFFC03314 /* HMDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xFFC03318 /* HMDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xFFC03340 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xFFC03344 /* HMDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xFFC03348 /* HMDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xFFC03350 /* HMDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xFFC03354 /* HMDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xFFC03358 /* HMDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
/* Are we prepared to handle this kernel fault?
*
* (The kernel has valid exception-points in the source
- * when it acesses user-memory. When it fails in one
+ * when it accesses user-memory. When it fails in one
* of those points, we find it in a table and do a jump
* to some fixup code that loads an appropriate error
* code)
#endif
/*
- ** Not virtually contigous.
+ ** Not virtually contiguous.
** Terminate prev chunk.
** Start a new chunk.
**
(p6) br.cond.spnt .ia32_strace_check_retval
;; // prevent RAW on r8
END(ia32_ret_from_clone)
- // fall thrugh
+ // fall through
GLOBAL_ENTRY(ia32_ret_from_syscall)
PT_REGS_UNWIND_INFO(0)
unsigned long ip; /* where the overflow interrupt happened */
unsigned long tstamp; /* ar.itc when entering perfmon intr. handler */
- unsigned short cpu; /* cpu on which the overfow occured */
+ unsigned short cpu; /* cpu on which the overflow occurred */
unsigned short set; /* event set active when overflow occurred */
int tgid; /* thread group id (for NPTL, this is getpid()) */
} pfm_default_smpl_entry_t;
#define IIO_IIDSR_LVL_SHIFT 0
#define IIO_IIDSR_LVL_MASK 0x000000ff
-/* Xtalk timeout threshhold register (IIO_IXTT) */
+/* Xtalk timeout threshold register (IIO_IXTT) */
#define IXTT_RRSP_TO_SHFT 55 /* read response timeout */
#define IXTT_RRSP_TO_MASK (0x1FULL << IXTT_RRSP_TO_SHFT)
#define IXTT_RRSP_PS_SHFT 32 /* read response TO prescaler */
case ESI_DESC_ENTRY_POINT:
break;
default:
- printk(KERN_WARNING "Unkown table type %d found in "
+ printk(KERN_WARNING "Unknown table type %d found in "
"ESI table, ignoring rest of table\n", *p);
return -ENODEV;
}
* IA64_THREAD_DBG_VALID set. This indicates a task which was
* able to use the debug registers for debugging purposes via
* ptrace(). Therefore we know it was not using them for
- * perfmormance monitoring, so we only decrement the number
+ * performance monitoring, so we only decrement the number
* of "ptraced" debug register users to keep the count up to date
*/
int
bra.l _real_ovfl
-# overflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# overflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
fovfl_inex_on:
bra.l _real_unfl
-# undeflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# underflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
funfl_inex_on:
tst.w %d0 # is instr fmovm?
bmi.b iea_dis_fmovm # yes
-# instruction is using an extended precision immediate operand. therefore,
+# instruction is using an extended precision immediate operand. Therefore,
# the total instruction length is 16 bytes.
iea_dis_immed:
mov.l &0x10,%d0 # 16 bytes of instruction
bge.b sok_norm2 # thank goodness no
# the multiply factor that we're trying to create should be a denorm
-# for the multiply to work. therefore, we're going to actually do a
+# for the multiply to work. Therefore, we're going to actually do a
# multiply with a denorm which will cause an unimplemented data type
# exception to be put into the machine which will be caught and corrected
# later. we don't do this with the DENORMs above because this method
#
# operand will underflow AND underflow or inexact is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fin_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# The destination was In Range and the source was a ZERO. The result,
-# therefore, is an INF w/ the proper sign.
+# Therefore, is an INF w/ the proper sign.
# So, determine the sign and return a new INF (w/ the j-bit cleared).
#
global fdiv_inf_load # global for fsgldiv
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fneg_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fabs_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# the ZEROes have opposite signs:
-# - therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
+# - Therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
# - -ZERO is returned in the case of RM.
#
fadd_zero_2_chk_rm:
#
# the ZEROes have the same signs:
-# - therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
+# - Therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
# - -ZERO is returned in the case of RM.
#
fsub_zero_2_chk_rm:
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fsqrt_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
tst.l %d0
bne.b fout_pack_set
# "mantissa" is all zero which means that the answer is zero. but, the '040
-# algorithm allows the exponent to be non-zero. the 881/2 do not. therefore,
+# algorithm allows the exponent to be non-zero. the 881/2 do not. Therefore,
# if the mantissa is zero, I will zero the exponent, too.
# the question now is whether the exponent's sign bit is allowed to be non-zero
# for a zero, also...
rts
# #
-# dnrm_lp(): normalize exponent/mantissa to specified threshhold #
+# dnrm_lp(): normalize exponent/mantissa to specified threshold #
# #
# INPUT: #
# %a0 : points to the operand to be denormalized #
bgt.b unnorm_nrm_zero # yes; denorm only until exp = 0
#
-# exponent would not go < 0. therefore, number stays normalized
+# exponent would not go < 0. Therefore, number stays normalized
#
sub.w %d0, %d1 # shift exponent value
mov.w FTEMP_EX(%a0), %d0 # load old exponent
bra.l _real_ovfl
-# overflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# overflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
fovfl_inex_on:
bra.l _real_unfl
-# undeflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# underflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
funfl_inex_on:
tst.w %d0 # is instr fmovm?
bmi.b iea_dis_fmovm # yes
-# instruction is using an extended precision immediate operand. therefore,
+# instruction is using an extended precision immediate operand. Therefore,
# the total instruction length is 16 bytes.
iea_dis_immed:
mov.l &0x10,%d0 # 16 bytes of instruction
rts
# #
-# dnrm_lp(): normalize exponent/mantissa to specified threshhold #
+# dnrm_lp(): normalize exponent/mantissa to specified threshold #
# #
# INPUT: #
# %a0 : points to the operand to be denormalized #
bgt.b unnorm_nrm_zero # yes; denorm only until exp = 0
#
-# exponent would not go < 0. therefore, number stays normalized
+# exponent would not go < 0. Therefore, number stays normalized
#
sub.w %d0, %d1 # shift exponent value
mov.w FTEMP_EX(%a0), %d0 # load old exponent
tst.l %d0
bne.b fout_pack_set
# "mantissa" is all zero which means that the answer is zero. but, the '040
-# algorithm allows the exponent to be non-zero. the 881/2 do not. therefore,
+# algorithm allows the exponent to be non-zero. the 881/2 do not. Therefore,
# if the mantissa is zero, I will zero the exponent, too.
# the question now is whether the exponent's sign bit is allowed to be non-zero
# for a zero, also...
#
# operand will underflow AND underflow or inexact is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fin_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# The destination was In Range and the source was a ZERO. The result,
-# therefore, is an INF w/ the proper sign.
+# Therefore, is an INF w/ the proper sign.
# So, determine the sign and return a new INF (w/ the j-bit cleared).
#
global fdiv_inf_load # global for fsgldiv
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fneg_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fabs_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# the ZEROes have opposite signs:
-# - therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
+# - Therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
# - -ZERO is returned in the case of RM.
#
fadd_zero_2_chk_rm:
#
# the ZEROes have the same signs:
-# - therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
+# - Therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
# - -ZERO is returned in the case of RM.
#
fsub_zero_2_chk_rm:
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fsqrt_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
/*
* Macintosh hardware profile data - unused, see macintosh.h for
- * resonable type values
+ * reasonable type values
*/
#define BI_MAC_VIA1BASE 0x8010 /* Mac VIA1 base address (always present) */
* It is based on demo code originally Copyright 2001 by Intel Corp, taken from
* http://www.embedded.com/showArticle.jhtml?articleID=19205567
*
- * Attempts were made, unsuccesfully, to contact the original
+ * Attempts were made, unsuccessfully, to contact the original
* author of this code (Michael Morrow, Intel). Below is the original
* copyright notice.
*
* It is based on demo code originally Copyright 2001 by Intel Corp, taken from
* http://www.embedded.com/showArticle.jhtml?articleID=19205567
*
- * Attempts were made, unsuccesfully, to contact the original
+ * Attempts were made, unsuccessfully, to contact the original
* author of this code (Michael Morrow, Intel). Below is the original
* copyright notice.
*
* It is based on demo code originally Copyright 2001 by Intel Corp, taken from
* http://www.embedded.com/showArticle.jhtml?articleID=19205567
*
- * Attempts were made, unsuccesfully, to contact the original
+ * Attempts were made, unsuccessfully, to contact the original
* author of this code (Michael Morrow, Intel). Below is the original
* copyright notice.
*
/* BIG FAT WARNING: races danger!
No protections exist here. Current users are only early init code,
- when locking is not needed because no cuncurency yet exists there,
+ when locking is not needed because no concurrency yet exists there,
and GPIO IRQ dispatcher, which does locking.
However, if many uses will ever happen, proper locking will be needed
- including locking between different uses
u32 _unused5;
u8 _write[3];
volatile u8 write;
-#define SGIOC_WRITE_NTHRESH 0x01 /* use 4.5db threshhold */
+#define SGIOC_WRITE_NTHRESH 0x01 /* use 4.5db threshold */
#define SGIOC_WRITE_TPSPEED 0x02 /* use 100ohm TP speed */
#define SGIOC_WRITE_EPSEL 0x04 /* force cable mode: 1=AUI 0=TP */
#define SGIOC_WRITE_EASEL 0x08 /* 1=autoselect 0=manual cable selection */
#define G_MAC_TXD_WEIGHT1(x) _SB_GETVALUE(x, S_MAC_TXD_WEIGHT1, M_MAC_TXD_WEIGHT1)
/*
- * MAC Fifo Threshhold registers (Table 9-14)
+ * MAC Fifo Threshold registers (Table 9-14)
* Register: MAC_THRSH_CFG_0
* Register: MAC_THRSH_CFG_1
* Register: MAC_THRSH_CFG_2
struct {
u64 rsvd1: 15,
error: 1, /* Widget rcvd wr resp pkt w/ error */
- ovflow: 5, /* Over flow count. perf measurement */
+ ovflow: 5, /* Overflow count. perf measurement */
fire_and_forget: 1, /* Launch Write without response */
mode: 2, /* Widget operation Mode */
rsvd2: 2,
if (!((asid += ASID_INC) & ASID_MASK) ) {
if (cpu_has_vtag_icache)
flush_icache_all();
- /* Traverse all online CPUs (hack requires contigous range) */
+ /* Traverse all online CPUs (hack requires contiguous range) */
for_each_online_cpu(i) {
/*
* We don't need to worry about our own CPU, nor those of
case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM):
DPDNORMX;
- /* FAAL THOROUGH */
+ /* FALL THROUGH */
case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_DNORM):
/* normalize ym,ye */
if (chip_id == SMSC_FDC37M81X_CHIP_ID)
smsc_fdc37m81x_config_end();
else {
- printk(KERN_WARNING "%s: unknow chip id 0x%02x\n", __func__,
+ printk(KERN_WARNING "%s: unknown chip id 0x%02x\n", __func__,
chip_id);
g_smsc_fdc37m81x_base = 0;
}
#define PMRN_PMLCB2 0x112 /* PM Local Control B2 */
#define PMRN_PMLCB3 0x113 /* PM Local Control B3 */
-#define PMLCB_THRESHMUL_MASK 0x0700 /* Threshhold Multiple Field */
+#define PMLCB_THRESHMUL_MASK 0x0700 /* Threshold Multiple Field */
#define PMLCB_THRESHMUL_SHIFT 8
#define PMLCB_THRESHOLD_MASK 0x003f /* Threshold Field */
{ 0x2030, 0x08 /* SIGFPE */ }, /* spe fp data */
{ 0x2040, 0x08 /* SIGFPE */ }, /* spe fp data */
{ 0x2050, 0x08 /* SIGFPE */ }, /* spe fp round */
- { 0x2060, 0x0e /* SIGILL */ }, /* performace monitor */
+ { 0x2060, 0x0e /* SIGILL */ }, /* performance monitor */
{ 0x2900, 0x08 /* SIGFPE */ }, /* apu unavailable */
{ 0x3100, 0x0e /* SIGALRM */ }, /* fixed interval timer */
{ 0x3200, 0x02 /* SIGINT */ }, /* watchdog */
mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | THRM1_TIE | THRM1_TID);
/* setup THRM2,
- * threshold, valid bit, enable interrupts, interrupt when above threshhold
+ * threshold, valid bit, enable interrupts, interrupt when above threshold
*/
mtspr (SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V | THRM1_TIE);
#else
* to a latch. The new values (interrupt setting bits, reset
* counter value etc.) are not copied to the actual registers
* until the performance monitor is enabled. In order to get
- * this to work as desired, the permormance monitor needs to
+ * this to work as desired, the performance monitor needs to
* be disabled while writing to the latches. This is a
* HW design issue.
*/
* to a latch. The new values (interrupt setting bits, reset
* counter value etc.) are not copied to the actual registers
* until the performance monitor is enabled. In order to get
- * this to work as desired, the permormance monitor needs to
+ * this to work as desired, the performance monitor needs to
* be disabled while writing to the latches. This is a
* HW design issue.
*/
};
/* ======================================================================== */
-/* PCI configuration acess */
+/* PCI configuration access */
/* ======================================================================== */
static int
* 1 -> Skip the device but act as if the access was successful
* (return 0xff's on reads, eventually, cache config space
* accesses in a later version)
- * -1 -> Hide the device (unsuccessful acess)
+ * -1 -> Hide the device (unsuccessful access)
*/
static int u3_ht_skip_device(struct pci_controller *hose,
struct pci_bus *bus, unsigned int devfn)
dp = ((unsigned int*)tbl->it_base) + index;
- /* On U3, all memory is contigous, so we can move this
+ /* On U3, all memory is contiguous, so we can move this
* out of the loop.
*/
l = npages;
__u16 opc = *((__u16 *) opcode);
if ((opc & 0x90) == 0) { /* test if rx in {0,2,4,6} */
- /* we got an exception therfore ry can't be in {0,2,4,6} */
+ /* we got an exception therefore ry can't be in {0,2,4,6} */
asm volatile( /* load rx from fp_regs.fprs[ry] */
" bras 1,0f\n"
" ld 0,0(%1)\n"
__u16 opc = *((__u16 *) opcode);
if ((opc & 0x90) == 0) { /* test if rx in {0,2,4,6} */
- /* we got an exception therfore ry can't be in {0,2,4,6} */
+ /* we got an exception therefore ry can't be in {0,2,4,6} */
asm volatile( /* load rx from fp_regs.fprs[ry] */
" bras 1,0f\n"
" le 0,0(%1)\n"
#include <linux/types.h>
/*
- * FIXME: Acessing the desc_struct through its fields is more elegant,
+ * FIXME: Accessing the desc_struct through its fields is more elegant,
* and should be the one valid thing to do. However, a lot of open code
- * still touches the a and b acessors, and doing this allow us to do it
+ * still touches the a and b accessors, and doing this allow us to do it
* incrementally. We keep the signature as a struct, rather than an union,
* so we can get rid of it transparently in the future -- glommer
*/
/*
* generic node memory support, the following assumptions apply:
*
- * 1) memory comes in 64Mb contigious chunks which are either present or not
+ * 1) memory comes in 64Mb contiguous chunks which are either present or not
* 2) we will not have more than 64Gb in total
*
* for now assume that 64Gb is max amount of RAM for whole system
#define DESC_STATUS_SOURCE_TIMEOUT 3
/*
- * source side threshholds at which message retries print a warning
+ * source side thresholds at which message retries print a warning
*/
#define SOURCE_TIMEOUT_LIMIT 20
#define DESTINATION_TIMEOUT_LIMIT 20
if (!acpi_sci_override_gsi)
acpi_sci_ioapic_setup(acpi_gbl_FADT.sci_interrupt, 0, 0);
- /* Fill in identity legacy mapings where no override */
+ /* Fill in identity legacy mappings where no override */
mp_config_acpi_legacy_irqs();
count =
goto out;
/*
- * aperture was sucessfully enlarged by 128 MB, try
+ * aperture was successfully enlarged by 128 MB, try
* allocation again
*/
goto retry;
struct pci_dev *dev = NULL;
u16 devid, devid2;
- /* allocate passthroug domain */
+ /* allocate passthrough domain */
pt_domain = protection_domain_alloc();
if (!pt_domain)
return -ENOMEM;
return 0;
/*
- * If we are way outside a reasoable range then just skip forward:
+ * If we are way outside a reasonable range then just skip forward:
*/
if (unlikely(left <= -period)) {
left = period;
/*
* Interrupts are disabled on entry as trap3 is an interrupt gate and they
- * remain disabled thorough out this function.
+ * remain disabled throughout this function.
*/
static int __kprobes kprobe_handler(struct pt_regs *regs)
{
/*
* Interrupts are disabled on entry as trap1 is an interrupt gate and they
- * remain disabled thoroughout this function.
+ * remain disabled throughout this function.
*/
static int __kprobes post_kprobe_handler(struct pt_regs *regs)
{
*/
/*
* Interrupts are disabled on entry as trap3 is an interrupt gate
- * and they remain disabled thorough out this function.
+ * and they remain disabled throughout this function.
*/
int kmmio_handler(struct pt_regs *regs, unsigned long addr)
{
/*
* Interrupts are disabled on entry as trap1 is an interrupt gate
- * and they remain disabled thorough out this function.
+ * and they remain disabled throughout this function.
* This must always get called as the pair to kmmio_handler().
*/
static int post_kmmio_handler(unsigned long condition, struct pt_regs *regs)
* Description:
* Add this blk_iopoll structure to the pending poll list and trigger the
* raise of the blk iopoll softirq. The driver must already have gotten a
- * succesful return from blk_iopoll_sched_prep() before calling this.
+ * successful return from blk_iopoll_sched_prep() before calling this.
**/
void blk_iopoll_sched(struct blk_iopoll *iop)
{
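A minimal sketch of the usual call pattern, assuming blk_iopoll_sched_prep() returns
nonzero when the instance is enabled and not already scheduled; the interrupt handler
itself is hypothetical:

	static irqreturn_t my_irq_handler(int irq, void *data)
	{
		struct blk_iopoll *iop = data;

		/* only schedule the softirq poll if prep says we may */
		if (blk_iopoll_sched_prep(iop))
			blk_iopoll_sched(iop);
		return IRQ_HANDLED;
	}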
{ 0x27DF, 0x1028, 0x02b0 }, /* ICH7 on unknown Dell */
{ 0x27DF, 0x1043, 0x1267 }, /* ICH7 on Asus W5F */
{ 0x27DF, 0x103C, 0x30A1 }, /* ICH7 on HP Compaq nc2400 */
- { 0x27DF, 0x103C, 0x361a }, /* ICH7 on unkown HP */
+ { 0x27DF, 0x103C, 0x361a }, /* ICH7 on unknown HP */
{ 0x27DF, 0x1071, 0xD221 }, /* ICH7 on Hercules EC-900 */
{ 0x27DF, 0x152D, 0x0778 }, /* ICH7 on unknown Intel */
{ 0x24CA, 0x1025, 0x0061 }, /* ICH4 on ACER Aspire 2023WLMi */
/*
* SATA-FSL host controller supports a max. of (15+1) direct PRDEs, and
* chained indirect PRDEs up to a max count of 63.
- * We are allocating an array of 63 PRDEs contigiously, but PRDE#15 will
+ * We are allocating an array of 63 PRDEs contiguously, but PRDE#15 will
* be set up as an indirect descriptor, pointing to its next
- * (contigious) PRDE. Though chained indirect PRDE arrays are
+ * (contiguous) PRDE. Though chained indirect PRDE arrays are
* supported, it will be more efficient to use a direct PRDT and
* a single chain/link to indirect PRDE array/PRDT.
*/
u32 ttl_dwords = 0;
/*
- * NOTE : direct & indirect prdt's are contigiously allocated
+ * NOTE : direct & indirect prdt's are contiguously allocated
*/
struct prde *prd = (struct prde *)&((struct command_desc *)
cmd_desc)->prdt;
IF_ERR(printk(" cause: packet time out\n");)
}
else {
- IF_ERR(printk(" cause: buffer over flow\n");)
+ IF_ERR(printk(" cause: buffer overflow\n");)
}
goto out_free_desc;
}
* @dev: device to try to bind to the driver
*
* This function returns -ENODEV if the device is not registered,
- * 1 if the device is bound sucessfully and 0 otherwise.
+ * 1 if the device is bound successfully and 0 otherwise.
*
* This function must be called with @dev->sem held. When called for a
* USB interface, @dev->parent->sem must be held as well.
break;
default:
- BT_ERR("Unknow packet type:%d", type);
+ BT_ERR("Unknown packet type:%d", type);
print_hex_dump_bytes("", DUMP_PREFIX_OFFSET, payload,
blksz * buf_block_len);
struct hci_uart *hu;
if (!hdev) {
- BT_ERR("Frame for uknown device (hdev=NULL)");
+ BT_ERR("Frame for unknown device (hdev=NULL)");
return -ENODEV;
}
*
* Added devfs support.
* Jan-11-1998, C. Scott Ananian <cananian@alumni.princeton.edu>
- * Shared /dev/zero mmaping support, Feb 2000, Kanoj Sarcar <kanoj@sgi.com>
+ * Shared /dev/zero mmapping support, Feb 2000, Kanoj Sarcar <kanoj@sgi.com>
*/
#include <linux/mm.h>
/*
* mspec_mmap
*
- * Called when mmaping the device. Initializes the vma with a fault handler
+ * Called when mmapping the device. Initializes the vma with a fault handler
* and private data structure necessary to allocate, track, and free the
* underlying pages.
*/
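For context, a generic sketch (not the mspec driver's actual code) of what such an mmap
handler typically does; mydev_vm_ops and struct mydev_vdata are invented names:

	static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct mydev_vdata *vdata;

		vdata = kzalloc(sizeof(*vdata), GFP_KERNEL);	/* per-mapping state */
		if (!vdata)
			return -ENOMEM;
		vma->vm_private_data = vdata;	/* used to track the allocated pages */
		vma->vm_ops = &mydev_vm_ops;	/* .fault allocates pages on demand */
		return 0;
	}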
}
break;
case R3964_WAIT_FOR_RX_REPEAT:
- /* FALLTROUGH */
+ /* FALLTHROUGH */
case R3964_IDLE:
if (c == STX) {
/* Prevent rx_queue from overflow: */
typedef struct COST_ROUTE COST_ROUTE;
struct COST_ROUTE {
unsigned char cost; /* Cost down this link */
- unsigned char route[NODE_BYTES]; /* Nodes thorough this route */
+ unsigned char route[NODE_BYTES]; /* Nodes through this route */
};
typedef struct ROUTE_STR ROUTE_STR;
dev->dmareg |= HIFN_DMAIER_PUBDONE;
hifn_write_1(dev, HIFN_1_DMA_IER, dev->dmareg);
- dprintk("Chip %s: Public key engine has been sucessfully "
+ dprintk("Chip %s: Public key engine has been successfully "
"initialised.\n", dev->name);
}
}
/**
- * atc_desc_get - get a unsused descriptor from free_list
+ * atc_desc_get - get an unused descriptor from free_list
* @atchan: channel we want a new descriptor for
*/
static struct at_desc *atc_desc_get(struct at_dma_chan *atchan)
* This function builds the tree representation of the topology given
* by the self IDs from the latest bus reset. During the construction
* of the tree, the function checks that the self IDs are valid and
- * internally consistent. On succcess this function returns the
+ * internally consistent. On success this function returns the
* fw_node corresponding to the local card otherwise NULL.
*/
static struct fw_node *build_tree(struct fw_card *card,
* functions & device file and adds it to the master fd list.
*
* RETURNS:
- * Zero on success, error code on falure.
+ * Zero on success, error code on failure.
*/
int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
const struct drm_framebuffer_funcs *funcs)
} else if (connector->funcs->set_property)
ret = connector->funcs->set_property(connector, property, out_resp->value);
- /* store the property value if succesful */
+ /* store the property value if successful */
if (!ret)
drm_connector_property_set_value(connector, property, out_resp->value);
out:
* i915_gem_release_mmap - remove physical page mappings
* @obj: obj in question
*
- * Preserve the reservation of the mmaping with the DRM core code, but
+ * Preserve the reservation of the mmapping with the DRM core code, but
* relinquish ownership of the pages back to the system.
*
* It is vital that we remove the page mapping if we have mapped a tiled
/**
- * Curretly it is assumed that the old framebuffer is reused.
+ * Currently it is assumed that the old framebuffer is reused.
*
* LOCKING
* caller should hold the mode config lock.
/* Wrap with our custom algo which switches to DDC mode */
intel_output->ddc_bus->algo = &intel_sdvo_i2c_bit_algo;
- /* In defaut case sdvo lvds is false */
+ /* In default case sdvo lvds is false */
intel_sdvo_get_capabilities(intel_output, &sdvo_priv->caps);
if (intel_sdvo_output_setup(intel_output,
* AGP so that GPU can catch out of VRAM/AGP access
*/
if (rdev->mc.gtt_location > rdev->mc.mc_vram_size) {
- /* Enought place before */
+ /* Enough place before */
rdev->mc.vram_location = rdev->mc.gtt_location -
rdev->mc.mc_vram_size;
} else if (tmp > rdev->mc.mc_vram_size) {
- /* Enought place after */
+ /* Enough place after */
rdev->mc.vram_location = rdev->mc.gtt_location +
rdev->mc.gtt_size;
} else {
};
/**
- * Curretly it is assumed that the old framebuffer is reused.
+ * Currently it is assumed that the old framebuffer is reused.
*
* LOCKING
* caller should hold the mode config lock.
* Note that refcount can be at most 2, since during a free refcount=3
* might mean we have to allocate a new surface which might not always
* be available.
- * For example : we allocate three contigous surfaces ABC. If B is
+ * For example : we allocate three contiguous surfaces ABC. If B is
* freed, we suddenly need two surfaces to store A and C, which might
* not always be available.
*/
new_mem->mem_type == TTM_PL_SYSTEM) ||
(old_mem->mem_type == TTM_PL_SYSTEM &&
new_mem->mem_type == TTM_PL_TT)) {
- /* bind is enought */
+ /* bind is enough */
radeon_move_null(bo, new_mem);
return 0;
}
* AGP so that GPU can catch out of VRAM/AGP access
*/
if (rdev->mc.gtt_location > rdev->mc.mc_vram_size) {
- /* Enought place before */
+ /* Enough place before */
rdev->mc.vram_location = rdev->mc.gtt_location -
rdev->mc.mc_vram_size;
} else if (tmp > rdev->mc.mc_vram_size) {
- /* Enought place after */
+ /* Enough place after */
rdev->mc.vram_location = rdev->mc.gtt_location +
rdev->mc.gtt_size;
} else {
/*
* We need to use vmap to get the desired page protection
- * or to make the buffer object look contigous.
+ * or to make the buffer object look contiguous.
*/
prot = (mem->placement & TTM_PL_FLAG_CACHED) ?
PAGE_KERNEL :
}
/*
-function that update the status of the chips (temperature for exemple)
+function that updates the status of the chips (temperature for example)
*/
static struct adm1029_data *adm1029_update_device(struct device *dev)
{
data->prochot_interval = lm93_read_byte(client,
LM93_REG_PROCHOT_INTERVAL);
- /* Fan Boost Termperature registers */
+ /* Fan Boost Temperature registers */
for (i = 0; i < 4; i++)
data->boost[i] = lm93_read_byte(client, LM93_REG_BOOST(i));
0 - no debugging messages
1 - some debugging messages, but none during DMA frame transmission
2 - lots of messages, including during DMA frame transmission
- (will cause undeflows if your machine is too slow!)
+ (will cause underflows if your machine is too slow!)
*/
#define DV1394_DEBUG_LEVEL 0
#define IPATH_GPIO_SCL \
(1ULL << (_IPATH_GPIO_SCL_NUM+INFINIPATH_EXTC_GPIOOE_SHIFT))
-/* keep the code below somewhat more readonable; not used elsewhere */
+/* keep the code below somewhat more readable; not used elsewhere */
#define _IPATH_HTLINK0_CRCBITS (infinipath_hwe_htclnkabyte0crcerr | \
infinipath_hwe_htclnkabyte1crcerr)
#define _IPATH_HTLINK1_CRCBITS (infinipath_hwe_htclnkbbyte0crcerr | \
* @wd: Write Data - value to set in register
* @mask: ones where data should be spliced into reg.
*
- * Basic register read/modify/write, with un-needed acesses elided. That is,
+ * Basic register read/modify/write, with un-needed accesses elided. That is,
* a mask of zero will prevent write, while a mask of 0xFF will prevent read.
* returns current (presumed, if a write was done) contents of selected
* register, or <0 if errors.
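To make the mask semantics concrete, a generic read-modify-write sketch; read_reg() and
write_reg() are placeholders, not the driver's real accessors:

	static u8 rmw_masked(void __iomem *reg, u8 wd, u8 mask)
	{
		u8 cur = 0;

		if (mask != 0xff)		/* full mask: the old value is not needed */
			cur = read_reg(reg);
		cur = (cur & ~mask) | (wd & mask);
		if (mask != 0)			/* zero mask: read-only, skip the write */
			write_reg(reg, cur);
		return cur;
	}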
/* Set DFELTHFDR/HDR thresholds */
RXEQ_VAL(7, 8, 0, 0, 0, 0), /* FDR */
RXEQ_VAL(7, 0x21, 0, 0, 0, 0), /* HDR */
- /* Set TLTHFDR/HDR theshold */
+ /* Set TLTHFDR/HDR threshold */
RXEQ_VAL(7, 9, 2, 2, 2, 2), /* FDR */
RXEQ_VAL(7, 0x23, 2, 2, 2, 2), /* HDR */
/* Set Preamp setting 2 (ZFR/ZCNT) */
* anymore, so we do this only if selective signaling is off.
*
* Further, on 32-bit platforms, we can't use vmap() to make
- * the QP buffer virtually contigious. Thus we have to use
+ * the QP buffer virtually contiguous. Thus we have to use
* constant-sized WRs to make sure a WR is always fully within
* a single page-sized chunk.
*
INIT_DELAYED_WORK(&moduleloader_work, request_module_delayed);
ret = hp_sdc_init();
- /* after sucessfull initialization give SDC some time to settle
+ /* after successful initialization give SDC some time to settle
* and then load the hp_sdc_mlc upper layer driver */
if (!ret)
schedule_delayed_work(&moduleloader_work,
break;
default:
- printk(KERN_WARNING PREFIX "Unkown HIL Error status (%x)!\n", data);
+ printk(KERN_WARNING PREFIX "Unknown HIL Error status (%x)!\n", data);
break;
}
#define ATMEL_WM97XX_AC97C_IRQ (29)
#define ATMEL_WM97XX_GPIO_DEFAULT (32+16) /* Pin 16 on port B. */
#else
-#error Unkown CPU, this driver only supports AT32AP700X CPUs.
+#error Unknown CPU, this driver only supports AT32AP700X CPUs.
#endif
struct continuous {
*
* Notes:
* This is a wm97xx extended touch driver to capture touch
- * data in a continuous manner on the Intel XScale archictecture
+ * data in a continuous manner on the Intel XScale architecture
*
* Features:
* - codecs supported:- WM9705, WM9712, WM9713
/* When the AC97 queue has been drained we need to allow time
* to buffer up samples otherwise we end up spinning polling
* for samples. The controller can't have a suitably low
- * threashold set to use the notifications it gives.
+ * threshold set to use the notifications it gives.
*/
schedule_timeout_uninterruptible(1);
/* When the AC97 queue has been drained we need to allow time
* to buffer up samples otherwise we end up spinning polling
* for samples. The controller can't have a suitably low
- * threashold set to use the notifications it gives.
+ * threshold set to use the notifications it gives.
*/
msleep(1);
case 0: break;
case 1: s = "unknown class"; break;
case 2: s = "unknown function"; break;
- default: s = "unkown error"; break;
+ default: s = "unknown error"; break;
}
if (s)
printk(KERN_INFO "capidrv-%d: %s from controller 0x%x function %d: %s\n",
DELIVERY - indication entered isdn_rc function
RNR=... - application had returned RNR=... after the
look ahead callback
- RNum=0 - aplication had not returned any buffer to copy
+ RNum=0 - application had not returned any buffer to copy
this indication and will copy it itself
COMPLETE - XDI had copied the data to the buffers provided
by the application and is about to issue the
}
break;
default:
- diva_mnt_internal_dprintf (0, DLI_ERR, "Unknon IDI Ind (DMA mode): %02x", Ind);
+ diva_mnt_internal_dprintf (0, DLI_ERR, "Unknown IDI Ind (DMA mode): %02x", Ind);
}
p += (this_ind_length+1);
total_length -= (4 + this_ind_length);
}
break;
default:
- diva_mnt_internal_dprintf (0, DLI_ERR, "Unknon IDI Ind: %02x", Ind);
+ diva_mnt_internal_dprintf (0, DLI_ERR, "Unknown IDI Ind: %02x", Ind);
}
}
}
switch (protocol) {
case (-1): /* used for init */
bch->state = -1;
- /* fall trough */
+ /* fall through */
case (ISDN_P_NONE):
if (bch->state == ISDN_P_NONE)
return 0; /* already in idle state */
for (i = 0; list[i].name != NULL; i++)
if (list[i].num == num)
return list[i].name;
- return "<unkown USB Error>";
+ return "<unknown USB Error>";
}
/* USB descriptor needs to contain one of the following EndPoint combinations: */
pr_debug("%s: pump stev GSTN CLEAR\n", ch->is->name);
break;
default:
- pr_info("u%s: nknown pump stev %x\n", ch->is->name, devt);
+ pr_info("%s: unknown pump stev %x\n", ch->is->name, devt);
break;
}
}
break;
default:
DBG(HFCUSB_DBG_STATES,
- "HFC_USB: hfc_usb_d_l2l1: unkown state : %#x", pr);
+ "HFC_USB: hfc_usb_d_l2l1: unknown state : %#x", pr);
break;
}
}
unsigned short hl;
struct sk_buff *skb;
/*
- * we need to reserve enought space in front of
+ * we need to reserve enough space in front of
* sk_buff. old call to dev_alloc_skb only reserved
* 16 bytes, now we are looking at what the driver wants
*/
struct sk_buff *new_skb;
unsigned short hl;
/*
- * we need to reserve enought space in front of
+ * we need to reserve enough space in front of
* sk_buff. old call to dev_alloc_skb only reserved
* 16 bytes, now we are looking at what the driver wants.
*/
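A small sketch of the reservation being described; the helper name and the GFP flag are
assumptions:

	static struct sk_buff *isdn_alloc_skb(unsigned int len, unsigned short hl)
	{
		struct sk_buff *skb;

		skb = alloc_skb(len + hl, GFP_ATOMIC);	/* data plus requested headroom */
		if (skb)
			skb_reserve(skb, hl);		/* leave 'hl' bytes free in front */
		return skb;
	}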
*
* try to accomplish several tasks:
* - reassemble any complete fragment sequence (non-null 'start'
- * indicates there is a continguous sequence present)
+ * indicates there is a contiguous sequence present)
* - discard any incomplete sequences that are below minseq -- due
* to the fact that the sender always increments the sequence number, if there
* is an incomplete sequence below minseq, no new fragments would
}
return 0;
}
- /* BADMUL=value - dummy 0=disable errorchk disabled (treshold multiplier) */
+ /* BADMUL=value - dummy 0=disable errorchk disabled (threshold multiplier) */
if (!strncmp(p[0], "BADMUL", 6)) {
p[0] += 6;
switch (*p[0]) {
* crossconnections and conferences via software if not possible through
* hardware. If hardware capability is available, hardware is used.
*
- * Echo: Is generated by CMX and is used to check performane of hard and
+ * Echo: Is generated by CMX and is used to check performance of hard and
* software CMX.
*
* The CMX has special functions for conferences with one, two and more
if (tm->rcnt == 1) {
if (*debug & DEBUG_L2_TEI)
tm->tei_m.printdebug(fi,
- "check req for tei %d sucessful\n", tm->l2->tei);
+ "check req for tei %d successful\n", tm->l2->tei);
mISDN_FsmChangeState(fi, ST_TEI_NOP);
} else if (tm->rcnt > 1) {
/* duplicate assignment; remove */
*
* WARNING: This driver has only been tested on Apple's
* 1.25 MHz Dual G4 (March 03). It is tuned for a CPU
- * temperatur around 57 C.
+ * temperature around 57 C.
*
* Copyright (C) 2003, 2004 Samuel Rydh (samuel@ibrium.se)
*
op_count++;
- /* loop throgh all bytes of message i */
+ /* loop through all bytes of message i */
for(j = 0; j < m[i].len; j++) {
/* write back all bytes that could have been read */
m[i].buf[j] = (le32_to_cpu(op[op_count/3]) >> ((3-(op_count%3))*8));
* search callback possible return status
*
* DVBFE_ALGO_SEARCH_SUCCESS
- * The frontend search algorithm completed and returned succesfully
+ * The frontend search algorithm completed and returned successfully
*
* DVBFE_ALGO_SEARCH_ASLEEP
* The frontend search algorithm is sleeping
if (ret)
return ret;
- err("Unkown Anysee version: %02x %02x %02x. "\
+ err("Unknown Anysee version: %02x %02x %02x. "\
"Please report the <linux-dvb@linuxtv.org>.",
hw_info[0], hw_info[1], hw_info[2]);
{ &dibusb_dib3000mb_table[9], &dibusb_dib3000mb_table[11], NULL },
{ &dibusb_dib3000mb_table[10], &dibusb_dib3000mb_table[12], NULL },
},
- { "Unkown USB1.1 DVB-T device ???? please report the name to the author",
+ { "Unknown USB1.1 DVB-T device ???? please report the name to the author",
{ &dibusb_dib3000mb_table[13], NULL },
{ &dibusb_dib3000mb_table[14], NULL },
},
*state = REMOTE_KEY_REPEAT;
break;
default:
- deb_err("unkown type of remote status: %d\n",keybuf[0]);
+ deb_err("unknown type of remote status: %d\n",keybuf[0]);
break;
}
return 0;
stream->complete(stream, b, urb->actual_length);
break;
default:
- err("unkown endpoint type in completition handler.");
+ err("unknown endpoint type in completition handler.");
return;
}
usb_submit_urb(urb,GFP_ATOMIC);
case USB_ISOC:
return usb_isoc_urb_init(stream);
default:
- err("unkown URB-type for data transfer.");
+ err("unknown URB-type for data transfer.");
return -EINVAL;
}
}
if (input_mode == AU8522_INPUT_CONTROL_REG081H_SVIDEO_CH13 ||
input_mode == AU8522_INPUT_CONTROL_REG081H_SVIDEO_CH24) {
/* Despite what the table says, for the HVR-950q we still need
- to be in CVBS mode for the S-Video input (reason uknown). */
+ to be in CVBS mode for the S-Video input (reason unknown). */
/* filter_coef_type = 3; */
filter_coef_type = 5;
} else {
- /*
+/*
cx24110 - Single Chip Satellite Channel Receiver driver module
Copyright (C) 2002 Peter Hettkamp <peter.hettkamp@htp-tel.de> based on
{0x42,0x00}, /* @ middle bytes " */
{0x43,0x00}, /* @ LSB " */
/* leave the carrier tracking loop parameters on default */
- /* leave the bit timing loop parameters at gefault */
+ /* leave the bit timing loop parameters at default */
{0x56,0x4d}, /* set the filtune voltage to 2.7V, as recommended by */
/* the cx24108 data sheet for symbol rates above 15MS/s */
{0x57,0x00}, /* @ Filter sigma delta enabled, positive */
info("detected CX24113 variant\n");
break;
case REV_CX24113:
- info("sucessfully detected\n");
+ info("successfully detected\n");
break;
default:
err("unsupported device id: %x\n", state->rev);
case BANDWIDTH_AUTO:
return -EOPNOTSUPP;
default:
- err("unkown bandwidth value.");
+ err("unknown bandwidth value.");
return -EINVAL;
}
}
switch (state->current_modulation) {
case QAM_256:
case QAM_64:
- /* Need to undestand why there are 3 lock levels here */
+ /* Need to understand why there are 3 lock levels here */
if ((buf[0] & 0x07) == 0x07)
*status |= FE_HAS_CARRIER;
break;
switch (state->current_modulation) {
case QAM_256:
case QAM_64:
- /* Need to undestand why there are 3 lock levels here */
+ /* Need to understand why there are 3 lock levels here */
if ((buf[0] & 0x07) == 0x07)
*status |= FE_HAS_CARRIER;
else
* stb0899_track
* periodically check the signal level against a specified
* threshold level and perform derotator centering.
- * called once we have a lock from a succesful search
+ * called once we have a lock from a successful search
* event.
*
* Will be called periodically called to maintain the
* use 0x15 to track VPE interrupts - increase by 1 every vpeirq() is called
*/
saa7146_write(dev, EC1SSR, (0x03<<2) | 3 );
- /* set event counter 1 treshold to maximum allowed value (rEC p55) */
+ /* set event counter 1 threshold to maximum allowed value (rEC p55) */
saa7146_write(dev, ECT1R, 0x3fff );
#endif
/* Set RPS1 Address register to point to RPS code (r108 p42) */
* use 0x15 to track VPE interrupts - increase by 1 every vpeirq() is called
*/
saa7146_write(dev, EC1SSR, (0x03<<2) | 3 );
- /* set event counter 1 treshold to maximum allowed value (rEC p55) */
+ /* set event counter 1 threshold to maximum allowed value (rEC p55) */
saa7146_write(dev, ECT1R, 0x3fff );
#endif
/* Setup BUDGETPATCH MAIN RPS1 "program" (p35) */
// use 0x03 to track RPS1 interrupts - increase by 1 every gpio3 is toggled
// use 0x15 to track VPE interrupts - increase by 1 every vpeirq() is called
saa7146_write(dev, EC1SSR, (0x03<<2) | 3 );
- // set event counter 1 treshold to maximum allowed value (rEC p55)
+ // set event counter 1 threshold to maximum allowed value (rEC p55)
saa7146_write(dev, ECT1R, 0x3fff );
#endif
// Fix VSYNC level
* http://av-usbradio.sourceforge.net/index.php
* http://sourceforge.net/projects/av-usbradio/
* Latest release of theirs project was in 2005.
- * Probably, this driver could be improved trough using their
+ * Probably, this driver could be improved through using their
* achievements (specifications given).
* Also, Faidon Liambotis <paravoid@debian.org> wrote nice driver for this radio
* in 2007. He allowed to use his driver to improve current mr800 radio driver.
cx231xx_info("No ACK after %d msec -GPIO I2C failed!",
nInit * 10);
- /* readAck
- throuth clock stretch ,slave has given a SCL signal,
- so the SDA data can be directly read. */
+ /*
+ * readAck
+ * through clock stretching, the slave has given an SCL signal,
+ * so the SDA data can be directly read.
+ */
status = cx231xx_get_gpio_bit(dev, dev->gpio_dir, (u8 *)&dev->gpio_val);
if ((dev->gpio_val & 1 << dev->board.tuner_sda_gpio) == 0) {
int err, i;
/* Here we need to allocate the correct number of frontends,
- * as reflected in the cards struct. The reality is that currrently
+ * as reflected in the cards struct. The reality is that currently
* no cx23885 boards support this - yet. But, if we don't modify this
* code then the second frontend would never be allocated (later)
* and fail with error before the attach in dvb_register().
/* setup image format */
cx_andor(MO_COLOR_CTRL, 0x4000, 0x4000);
- /* setup FIFO Threshholds */
+ /* setup FIFO Thresholds */
cx_write(MO_PDMA_STHRSH, 0x0807);
cx_write(MO_PDMA_DTHRSH, 0x0807);
if ((ccdcparam->med_filt_thres < 0) ||
(ccdcparam->med_filt_thres > CCDC_MED_FILT_THRESH)) {
- dev_dbg(dev, "Invalid value of median filter thresold\n");
+ dev_dbg(dev, "Invalid value of median filter threshold\n");
return -EINVAL;
}
int (*enable_clock)(enum vpss_clock_sel clock_sel, int en);
/* select input to ccdc */
void (*select_ccdc_source)(enum vpss_ccdc_source_sel src_sel);
- /* clear wbl overlflow bit */
+ /* clear wbl overflow bit */
int (*clear_wbl_overflow)(enum vpss_wbl_sel wbl_sel);
};
};
static const __u8 ov6650_sensor_init[][8] =
{
- /* Bright, contrast, etc are set througth SCBB interface.
+ /* Bright, contrast, etc are set through SCBB interface.
* AVCAP on win2 do not send any data on this controls. */
/* Anyway, some registers appears to alter bright and constrat */
int err;
__u8 Data;
- /* some unknow command from Aiptek pocket dv and family300 */
+ /* some unknown command from Aiptek pocket dv and family300 */
reg_w(gspca_dev, 0x00, 0x0d01, 0x01);
reg_w(gspca_dev, 0x00, 0x0d03, 0x00);
{}
};
-/* Unknow camera from Ori Usbid 0x0000:0x0000 */
+/* Unknown camera from Ori Usbid 0x0000:0x0000 */
/* Based on snoops from Ori Cohen */
static const __u16 spca501c_mysterious_open_data[][3] = {
{0x02, 0x000f, 0x0005},
goto error;
break;
case MystFromOriUnknownCamera:
- /* UnKnow Ori CMOS Camera data */
+ /* Unknown Ori CMOS Camera data */
if (write_vector(gspca_dev, spca501c_mysterious_open_data))
goto error;
break;
write_vector(gspca_dev, spca501c_arowana_open_data);
break;
case MystFromOriUnknownCamera:
- /* UnKnow CMOS Camera data */
+ /* Unknown CMOS Camera data */
write_vector(gspca_dev, spca501c_mysterious_init_data);
break;
default:
count = 200;
while (--count > 0) {
msleep(10);
- /* gsmart mini2 write a each wait setting 1 ms is enought */
+ /* gsmart mini2 writes at each wait; a 1 ms setting is enough */
/* reg_w_riv(dev, req, idx, val); */
status = reg_r_12(gspca_dev, 0x01, 0x0001, 1);
if (status == endcode) {
break;
default:
PDEBUG(D_PROBE,
- "Sensor UNKNOW_0 force Tas5130");
+ "Sensor UNKNOWN_0 force Tas5130");
sd->sensor = SENSOR_TAS5130CXX;
}
break;
/* Bit mask of PVR2_CVAL_INPUT choices which are valid for the hardware */
unsigned int input_avail_mask;
- /* Bit mask of PVR2_CVAL_INPUT choices which are currenly allowed */
+ /* Bit mask of PVR2_CVAL_INPUT choices which are currently allowed */
unsigned int input_allowed_mask;
/* Location of eeprom or a negative number if none */
wake_up(&dev->fw_data->wait_fw);
break;
default:
- printk(KERN_INFO "s2255 unknwn resp\n");
+ printk(KERN_INFO "s2255 unknown resp\n");
}
default:
pdata++;
unsigned long jpeg_markers; /* Which markers should go into the JPEG output.
* Unless you exactly know what you do, leave them untouched.
* Inluding less markers will make the resulting code
- * smaller, but there will be fewer aplications
+ * smaller, but there will be fewer applications
* which can read it.
* The presence of the APP and COM marker is
* influenced by APP0_len and COM_len ONLY! */
* Allocate memory for the i2o_block_device struct, gendisk and request
* queue and initialize them as far as no additional information is needed.
*
- * Returns a pointer to the allocated I2O Block device on succes or a
+ * Returns a pointer to the allocated I2O Block device on success or a
* negative error code on failure.
*/
static struct i2o_block_device *i2o_block_device_alloc(void)
* Removes a previously added pointer from the context list and returns
* the matching context id.
*
- * Returns context id on succes or 0 on failure.
+ * Returns context id on success or 0 on failure.
*/
u32 i2o_cntxt_list_remove(struct i2o_controller * c, void *ptr)
{
* @c: controller to which the context list belong
* @ptr: pointer to which the context id should be fetched
*
- * Returns context id which matches to the pointer on succes or 0 on
+ * Returns context id which matches to the pointer on success or 0 on
* failure.
*/
u32 i2o_cntxt_list_get_ptr(struct i2o_controller * c, void *ptr)
/*
* gru_file_mmap
*
- * Called when mmaping the device. Initializes the vma with a fault handler
+ * Called when mmapping the device. Initializes the vma with a fault handler
* and private data structure necessary to allocate, track, and free the
* underlying pages.
*/
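The comment above describes a common pattern for mmap-able character devices: the mmap handler itself maps nothing, it only installs vm_ops with a fault callback and stashes per-open state in vm_private_data so pages can be allocated lazily on first touch. A minimal sketch of that shape, using purely illustrative example_* names (not the GRU driver's actual code) and the fault-handler signature of kernels from this era:

#include <linux/fs.h>
#include <linux/mm.h>

/* Fault handler: a real driver would allocate and insert the backing page here. */
static int example_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	return VM_FAULT_SIGBUS;	/* placeholder: no page provided in this sketch */
}

static const struct vm_operations_struct example_vm_ops = {
	.fault = example_vma_fault,
};

static int example_mmap(struct file *file, struct vm_area_struct *vma)
{
	vma->vm_ops = &example_vm_ops;			/* lazy, fault-driven mapping */
	vma->vm_private_data = file->private_data;	/* per-open tracking state */
	return 0;
}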
static struct s3c24xx_mci_pdata s3cmci_def_pdata = {
/* This is currently here to avoid a number of if (host->pdata)
- * checks. Any zero fields to ensure reaonable defaults are picked. */
+ * checks. Fields left at zero ensure reasonable defaults are picked. */
};
#ifdef CONFIG_CPU_FREQ
to specify the offset instead of the absolute address
NOTE:
- With slram it's only possible to map a contigous memory region. Therfore
+ With slram it's only possible to map a contiguous memory region. Therefore
if there's a device mapped somewhere in the region specified slram will
fail to load (see kernel log if modprobe fails).
};
/* Find the (I)NFTL Media Header, and optionally also the mirror media header.
- On sucessful return, buf will contain a copy of the media header for
+ On successful return, buf will contain a copy of the media header for
further processing. id is the string to scan for, and will presumably be
either "ANAND" or "BNAND". If findmirror=1, also look for the mirror media
header. The page #s of the found media headers are placed in mh0_page and
*
* The b2 shift is there to get rid of the lowest two bits.
* We could also do addressbits[b2] >> 1 but for the
- * performace it does not make any difference
+ * performance it does not make any difference
*/
if (eccsize_mult == 1)
byte_addr = (addressbits[b1] << 4) + addressbits[b0];
* @info: The controller instance.
* @nmtd: The driver version of the MTD instance.
*
- * This routine is called after the chip probe has succesfully completed
+ * This routine is called after the chip probe has successfully completed
* and the relevant per-chip information updated. This call ensure that
* we update the internal state accordingly.
*
TBD:
* look at deferring rx frames rather than discarding (as per tulip)
* handle tx ring full as per tulip
- * performace test to tune rx_copybreak
+ * performance test to tune rx_copybreak
Most of my modifications relate to the braindead big-endian
implementation by Intel. When the i596 is operating in
readl(lp->mmio + CMD7);
return 0;
}
-/* This function is called when a packet transmission fails to complete within a resonable period, on the assumption that an interrupts have been failed or the interface is locked up. This function will reinitialize the hardware */
+/*
+ * This function is called when a packet transmission fails to complete
+ * within a reasonable period, on the assumption that an interrupt has
+ * failed or the interface is locked up. This function will reinitialize
+ * the hardware.
+ */
static void amd8111e_tx_timeout(struct net_device *dev)
{
struct amd8111e_priv* lp = netdev_priv(dev);
* DAYNA driver mode:
* Dayna DL2000/DaynaTalk PC (Half Length), COPS LT-95,
* Farallon PhoneNET PC III, Farallon PhoneNET PC II
- * Other cards possibly supported mode unkown though:
+ * Other cards possibly supported, mode unknown though:
* Dayna DL2000 (Full length), COPS LT/M (Micro-Channel)
*
* Cards NOT supported by this driver but supported by the ltpc.c
#define DLNKTST 0x0010 /* Disable Link Status */
#define DAPC 0x0008 /* Disable Automatic Polarity Correction */
#define MENDECL 0x0004 /* MENDEC Loopback Mode */
-#define LRTTSEL 0x0002 /* Low Receive Treshold/Transmit Mode Select */
+#define LRTTSEL 0x0002 /* Low Receive Threshold/Transmit Mode Select */
#define PORTSEL1 0x0001 /* Port Select Bits */
#define PORTSEL2 0x8000 /* Port Select Bits */
#define INTL 0x4000 /* Internal Loopback */
if (status & ISR_OVER)
if (netif_msg_intr(adapter))
dev_warn(&pdev->dev,
- "TX/RX over flow (status = 0x%x)\n",
+ "TX/RX overflow (status = 0x%x)\n",
status & ISR_OVER);
/* link event */
* filtering capabilities. */
struct be_cmd_req_if_create {
struct be_cmd_req_hdr hdr;
- u32 version; /* ignore currntly */
+ u32 version; /* ignore currently */
u32 capability_flags;
u32 enable_flags;
u8 mac_addr[ETH_ALEN];
goto fw_exit;
}
- dev_info(&adapter->pdev->dev, "Firmware flashed succesfully\n");
+ dev_info(&adapter->pdev->dev, "Firmware flashed successfully\n");
fw_exit:
release_firmware(fw);
/* [RC 1] A flag to indicate that overflow error occurred in one of the
queues. */
#define QM_REG_OVFERROR 0x16805c
-/* [RC 7] the Q were the qverflow occurs */
+/* [RC 7] the Q where the overflow occurs */
#define QM_REG_OVFQNUM 0x168058
/* [R 16] Pause state for physical queues 15-0 */
#define QM_REG_PAUSESTATE0 0x168410
/*
* We do not use Tx completion interrupts to free DMAd Tx packets.
- * This is good for performamce but means that we rely on new Tx
+ * This is good for performance but means that we rely on new Tx
* packets arriving to run the destructors of completed packets,
* which open up space in their sockets' send queues. Sometimes
* we do not get such new packets causing Tx to stall. A single
ret = ehea_set_portspeed(port, sp);
if (!ret)
- ehea_info("%s: Port speed succesfully set: %dMbps "
+ ehea_info("%s: Port speed successfully set: %dMbps "
"%s Duplex",
port->netdev->name, port->port_speed,
port->full_duplex == 1 ? "Full" : "Half");
ret = ehea_set_portspeed(port, EHEA_SPEED_AUTONEG);
if (!ret)
- ehea_info("%s: Port speed succesfully set: %dMbps "
+ ehea_info("%s: Port speed successfully set: %dMbps "
"%s Duplex",
port->netdev->name, port->port_speed,
port->full_duplex == 1 ? "Full" : "Half");
* driver only supports standard serial hardware (8250, 16450, 16550A)
*
* This modem usually draws its supply current out of the otherwise unused
- * TXD pin of the serial port. Thus a contignuous stream of 0x00-bytes
+ * TXD pin of the serial port. Thus a continuous stream of 0x00-bytes
* is transmitted to achieve a positive supply voltage.
*
* hsk: This is a 4800 baud FSK modem, designed for TNC use. It works fine
unsigned long done;
int i = 1;
- /* FIXME: skbs are continguous in real addresses. Do we
+ /* FIXME: skbs are contiguous in real addresses. Do we
* really need to break it into PAGE_SIZE chunks, or can we do
* it just at the granularity of iSeries real->absolute
* mapping? Indeed, given the way the allocator works, can we
TBD:
* look at deferring rx frames rather than discarding (as per tulip)
* handle tx ring full as per tulip
- * performace test to tune rx_copybreak
+ * performance test to tune rx_copybreak
Most of my modifications relate to the braindead big-endian
implementation by Intel. When the i596 is operating in
TBD:
* look at deferring rx frames rather than discarding (as per tulip)
* handle tx ring full as per tulip
- * performace test to tune rx_copybreak
+ * performance test to tune rx_copybreak
Most of my modifications relate to the braindead big-endian
implementation by Intel. When the i596 is operating in
en_dbg(DRV, priv, "Freeing fragment:%d\n", nr);
dma = be64_to_cpu(rx_desc->data[nr].addr);
- en_dbg(DRV, priv, "Unmaping buffer at dma:0x%llx\n", (u64) dma);
+ en_dbg(DRV, priv, "Unmapping buffer at dma:0x%llx\n", (u64) dma);
pci_unmap_single(mdev->pdev, dma, skb_frags[nr].size,
PCI_DMA_FROMDEVICE);
put_page(skb_frags[nr].page);
static int inline_thold __read_mostly = MAX_INLINE;
module_param_named(inline_thold, inline_thold, int, 0444);
-MODULE_PARM_DESC(inline_thold, "treshold for using inline data");
+MODULE_PARM_DESC(inline_thold, "threshold for using inline data");
int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_tx_ring *ring, u32 size,
#define MLX4_EN_DEF_RX_PAUSE 1
#define MLX4_EN_DEF_TX_PAUSE 1
-/* Interval between sucessive polls in the Tx routine when polling is used
+/* Interval between successive polls in the Tx routine when polling is used
instead of interrupts (in per-core Tx rings) - should be power of 2 */
#define MLX4_EN_TX_POLL_MODER 16
#define MLX4_EN_TX_POLL_TIMEOUT (HZ / 4)
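The comment insists on a power of 2 presumably so the poll decision can be made with a mask instead of a division; a small sketch of that idiom (the counter name and helper are illustrative, not the mlx4_en driver's actual code):

#define TX_POLL_MODER	16	/* must be a power of 2 */

/* Poll for Tx completions only once every TX_POLL_MODER packets;
 * with a power-of-2 interval the modulo reduces to a cheap AND. */
static inline int tx_should_poll(unsigned int packets_sent)
{
	return (packets_sent & (TX_POLL_MODER - 1)) == 0;
}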
* @card: card structure
* @descr: descriptor to re-init
*
- * return 0 on succes, <0 on failure
+ * return 0 on success, <0 on failure
*
* allocates a new rx skb, iommu-maps it and attaches it to the descriptor.
* Activate the descriptor state-wise
sis_priv->rx_ring[entry].bufptr, RX_BUF_SIZE,
PCI_DMA_FROMDEVICE);
- /* refill the Rx buffer, what if there is not enought
+ /* refill the Rx buffer, what if there is not enough
* memory for new socket buffer ?? */
if ((skb = dev_alloc_skb(RX_BUF_SIZE)) == NULL) {
/*
}
/* This situation should never happen, but due to
- some unknow bugs, it is possible that
+ some unknown bugs, it is possible that
we are working on NULL sk_buff :-( */
if (sis_priv->rx_skbuff[entry] == NULL) {
if (netif_msg_rx_err(sis_priv))
*/
u_long mac_d_max ; /* MAC : D_Max timer value */
- u_long lct_short ; /* LCT : error threshhold */
- u_long lct_medium ; /* LCT : error threshhold */
- u_long lct_long ; /* LCT : error threshhold */
- u_long lct_extended ; /* LCT : error threshhold */
+ u_long lct_short ; /* LCT : error threshold */
+ u_long lct_medium ; /* LCT : error threshold */
+ u_long lct_long ; /* LCT : error threshold */
+ u_long lct_extended ; /* LCT : error threshold */
} ;
#ifdef DEBUG
}
break;
default:
- printk("ioctl for %s: unknow cmd: %04x\n", dev->name, ioc.cmd);
+ printk("ioctl for %s: unknown cmd: %04x\n", dev->name, ioc.cmd);
status = -EOPNOTSUPP;
} // switch
SMSC_TRACE(HW, "Passed Loop Back Test");
#endif /* USE_PHY_WORK_AROUND */
- SMSC_TRACE(HW, "phy initialised succesfully");
+ SMSC_TRACE(HW, "phy initialised successfully");
return 0;
}
#define SMSC_NAPI_WEIGHT 16
/* implements a PHY loopback test at initialisation time, to ensure a packet
- * can be succesfully looped back */
+ * can be successfully looped back */
#define USE_PHY_WORK_AROUND
#define DPRINTK(nlevel, klevel, fmt, args...) \
* @card: card structure
* @descr: descriptor to re-init
*
- * Return 0 on succes, <0 on failure.
+ * Return 0 on success, <0 on failure.
*
* Allocates a new rx skb, iommu-maps it and attaches it to the
* descriptor. Mark the descriptor as activated, ready-to-use.
" (threshold = %d)\n", txmode);
csr6 &= ~DMA_CONTROL_TSF;
csr6 &= DMA_CONTROL_TC_TX_MASK;
- /* Set the transmit threashold */
+ /* Set the transmit threshold */
if (txmode <= 32)
csr6 |= DMA_CONTROL_TTC_32;
else if (txmode <= 64)
#define DMA_CONTROL_DT 0x04000000 /* Disable Drop TCP/IP csum error */
#define DMA_CONTROL_RSF 0x02000000 /* Receive Store and Forward */
#define DMA_CONTROL_DFF 0x01000000 /* Disaable flushing */
-/* Theshold for Activating the FC */
+/* Threshold for Activating the FC */
enum rfa {
act_full_minus_1 = 0x00800000,
act_full_minus_2 = 0x00800200,
act_full_minus_3 = 0x00800400,
act_full_minus_4 = 0x00800600,
};
-/* Theshold for Deactivating the FC */
+/* Threshold for Deactivating the FC */
enum rfd {
deac_full_minus_1 = 0x00400000,
deac_full_minus_2 = 0x00400800,
smctr_malloc(dev, 1L);
/* Allocate Non-MAC receive data buffers.
- * To guarantee a minimum of 256 contigous memory to
+ * To guarantee a minimum of 256 bytes of contiguous memory to
* UM_Receive_Packet's lookahead pointer, before a page
* change or ring end is encountered, place each rx buffer on
* a 256 byte boundary.
prop = of_get_property(np, "tx-clock", NULL);
if (!prop) {
printk(KERN_ERR
- "ucc_geth: mising tx-clock-name property\n");
+ "ucc_geth: missing tx-clock-name property\n");
return -EINVAL;
}
if ((*prop < QE_CLK_NONE) || (*prop > QE_CLK24)) {
frames) received that were between 128
(Including FCS length==4) and 255 octets */
u32 txok; /* Total number of octets residing in frames
- that where involved in succesfull
+ that were involved in successful
transmission */
u16 txcf; /* Total number of PAUSE control frames
transmitted by this MAC */
u8 res4[0x2];
u32 tmca; /* Total number of frames that were transmitted
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
u32 tbca; /* Total number of frames transmitted
- succesfully that had destination address
+ successfully that had destination address
field equal to the broadcast address */
u32 rxfok; /* Total number of frames received OK */
u32 rxbok; /* Total number of octets received OK */
HW because it includes octets in frames that
never even reach the UCC */
u32 rmca; /* Total number of frames that were received
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
- u32 rbca; /* Total number of frames received succesfully
+ u32 rbca; /* Total number of frames received successfully
that had destination address equal to the
broadcast address */
u32 scar; /* Statistics carry register */
frames) received that were between 128
(Including FCS length==4) and 255 octets */
u32 txok; /* Total number of octets residing in frames
- that where involved in succesfull
+ that were involved in successful
transmission */
u16 txcf; /* Total number of PAUSE control frames
transmitted by this MAC */
u32 tmca; /* Total number of frames that were transmitted
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
u32 tbca; /* Total number of frames transmitted
- succesfully that had destination address
+ successfully that had destination address
field equal to the broadcast address */
u32 rxfok; /* Total number of frames received OK */
u32 rxbok; /* Total number of octets received OK */
HW because it includes octets in frames that
never even reach the UCC */
u32 rmca; /* Total number of frames that were received
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
- u32 rbca; /* Total number of frames received succesfully
+ u32 rbca; /* Total number of frames received successfully
that had destination address equal to the
broadcast address */
} __attribute__ ((packed));
mii_nway_restart(&dev->mii);
if (netif_msg_ifup(dev))
- devdbg(dev, "phy initialised succesfully");
+ devdbg(dev, "phy initialised successfully");
return 0;
}
sc->lmc_media = &lmc_t1_media;
break;
default:
- printk(KERN_WARNING "%s: LMC UNKOWN CARD!\n", dev->name);
+ printk(KERN_WARNING "%s: LMC UNKNOWN CARD!\n", dev->name);
break;
}
* device. See the file header for the format. Run all checks on the
* buffer header, then run over each payload's descriptors, verify
* their consistency and act on each payload's contents. If
- * everything is succesful, update the device's statistics.
+ * everything is successful, update the device's statistics.
*
* Note: You need to set the skb to contain only the length of the
* received buffer; for that, use skb_trim(skb, RECEIVED_SIZE).
/*
* This code is used to optimize rf gain on different environments
- * (temprature mostly) based on feedback from a power detector.
+ * (temperature mostly) based on feedback from a power detector.
*
* It's only used on RF5111 and RF5112, later RF chips seem to have
* auto adjustment on hw -notice they have a much smaller BANK 7 and
/* Fill curves in reverse order
* from lower power (max gain)
* to higher power. Use curve -> idx
- * backmaping we did on eeprom init */
+ * backmapping we did on eeprom init */
u8 idx = pdg_curve_to_idx[pdg];
/* Grab the needed curves by index */
/* Now we have a set of curves for this
* channel on tmpL (x range is table_max - table_min
* and y values are tmpL[pdg][]) sorted in the same
- * order as EEPROM (because we've used the backmaping).
+ * order as EEPROM (because we've used the backmapping).
* So for RF5112 it's from higher power to lower power
* and for RF2413 it's from lower power to higher power.
* For RF5111 we only have one curve. */
* Since this probe succeeded, we allow the next
* probe twice as soon. This allows the maxRate
* to move up faster if the probes are
- * succesful.
+ * successful.
*/
ath_rc_priv->probe_time =
now_msec - rate_table->probe_interval / 2;
/* get number of entries */
field_count = *(((u16 *) & field_info) + 1);
- /* abort if no enought memory */
+ /* abort if not enough memory */
total_length = field_len * field_count;
if (total_length > *len) {
*len = total_length;
IPW_MAX_BDS)) {
/* TODO: Support merging buffers if more than
* IPW_MAX_BDS are used */
- IPW_DEBUG_INFO("%s: Maximum BD theshold exceeded. "
+ IPW_DEBUG_INFO("%s: Maximum BD threshold exceeded. "
"Increase fragmentation level.\n",
priv->net_dev->name);
}
range->max_qual.updated = 7; /* Updated all three */
range->avg_qual.qual = 70; /* > 8% missed beacons is 'bad' */
- /* TODO: Find real 'good' to 'bad' threshol value for RSSI */
+ /* TODO: Find real 'good' to 'bad' threshold value for RSSI */
range->avg_qual.level = 20 + IPW2100_RSSI_TO_DBM;
range->avg_qual.noise = 0;
range->avg_qual.updated = 7; /* Updated all three */
/* get number of entries */
field_count = *(((u16 *) & field_info) + 1);
- /* abort if not enought memory */
+ /* abort if not enough memory */
total_len = field_len * field_count;
if (total_len > *len) {
*len = total_len;
case SEC_LEVEL_0:
break;
default:
- printk(KERN_ERR "Unknow security level %d\n",
+ printk(KERN_ERR "Unknown security level %d\n",
priv->ieee->sec.level);
break;
}
range->max_qual.updated = 7; /* Updated all three */
range->avg_qual.qual = 70;
- /* TODO: Find real 'good' to 'bad' threshol value for RSSI */
+ /* TODO: Find real 'good' to 'bad' threshold value for RSSI */
range->avg_qual.level = 0; /* FIXME to real average level */
range->avg_qual.noise = 0;
range->avg_qual.updated = 7; /* Updated all three */
case SEC_LEVEL_0:
break;
default:
- printk(KERN_ERR "Unknow security level %d\n",
+ printk(KERN_ERR "Unknown security level %d\n",
priv->ieee->sec.level);
break;
}
ieee->host_decrypt = 1;
ieee->host_mc_decrypt = 1;
- /* Host fragementation in Open mode. Default is enabled.
+ /* Host fragmentation in Open mode. Default is enabled.
* Note: host fragmentation is always enabled if host encryption
* is enabled. For cards can do hardware encryption, they must do
* hardware fragmentation as well. So we don't need a variable
/*
* iwm_hal_send_host_cmd(): sends commands to the UMAC or the LMAC.
* Sending command to the LMAC is equivalent to sending a
- * regular UMAC command with the LMAC passtrough or the LMAC
+ * regular UMAC command with the LMAC passthrough or the LMAC
* wrapper UMAC command IDs.
*/
int iwm_hal_send_host_cmd(struct iwm_priv *iwm,
kfree_skb(packet->skb);
break;
default:
- IWM_ERR(iwm, "Unknow ticket action: %d\n",
+ IWM_ERR(iwm, "Unknown ticket action: %d\n",
le16_to_cpu(ticket_node->ticket->action));
kfree_skb(packet->skb);
}
}
if (i == ARRAY_SIZE(if_sdio_models)) {
- lbs_pr_err("unkown card model 0x%x\n", card->model);
+ lbs_pr_err("unknown card model 0x%x\n", card->model);
ret = -ENODEV;
goto free;
}
return 0;
}
-/* Setting policy also clears the MAC acl, even if we don't change the defaut
+/* Setting policy also clears the MAC acl, even if we don't change the default
* policy
*/
case DOT11_OID_BEACON:
send_formatted_event(priv,
- "Received a beacon from an unkown AP",
+ "Received a beacon from an unknown AP",
mlme, 0);
break;
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 100
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 121
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 120
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 120
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 120
#endif
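The hunks above only touch the wording of the comment; they do not show how the offset is actually applied. If it is used the way such offsets typically are (an assumption), the conversion would look roughly like the following, with rssi_to_dbm() being a hypothetical helper rather than a real rt2x00 function:

#define DEFAULT_RSSI_OFFSET	120

/* Hypothetical conversion; the real driver may differ in sign or scaling. */
static inline int rssi_to_dbm(int rssi)
{
	return rssi - DEFAULT_RSSI_OFFSET;	/* e.g. 20 - 120 = -100 dBm */
}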
/* Prevent reentrancy. We need to do that because we may have
- * multiple interrupt handler running concurently.
+ * multiple interrupt handlers running concurrently.
* It is safe because interrupts are disabled before aquiring
* the spinlock. */
spin_lock(&lp->spinlock);
* zd_mac_tx_failed - callback for failed frames
* @dev: the mac80211 wireless device
*
- * This function is called if a frame couldn't be succesfully be
+ * This function is called if a frame couldn't successfully be
* transferred. The first frame from the tx queue, will be selected and
* reported as error to the upper layers.
*/
* Mark the I/O Pdir entries invalid and blow away the corresponding I/O
* TLB entries.
*
- * FIXME: at some threshhold it might be "cheaper" to just blow
+ * FIXME: at some threshold it might be "cheaper" to just blow
* away the entire I/O TLB instead of individual entries.
*
* FIXME: Uturn has 256 TLB entries. We don't need to purge every
* The speeds are stored on handles
* (FANA:FAN9), (FANC:FANB), (FANE:FAND).
*
- * There are three default speed sets, acessible as handles:
+ * There are three default speed sets, accessible as handles:
* FS1L,FS1M,FS1H; FS2L,FS2M,FS2H; FS3L,FS3M,FS3H
*
* ACPI DSDT switches which set is in use depending on various
return (unsigned char *)p;
break;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
case SMALL_TAG_END:
return p + 2;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
return (unsigned char *)p;
break;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
return (unsigned char *)p;
break;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
/**
* struct ps3_sys_manager_header - System manager message header.
* @version: Header version, currently 1.
- * @size: Header size in bytes, curently 16.
+ * @size: Header size in bytes, currently 16.
* @payload_size: Message payload size in bytes.
* @service_id: Message type, one of enum ps3_sys_manager_service_id.
* @request_tag: Unique number to identify reply.
goto err_io;
}
- /* Make sure frequency measurment mode, test modes, and lock
+ /* Make sure frequency measurement mode, test modes, and lock
* are all disabled */
v3020_set_reg(chip, V3020_STATUS_0, 0x0);
}
rc = raw3270_start(view, rq);
if (rc == 0) {
- /* Started sucessfully. Now wait for completion. */
+ /* Started successfully. Now wait for completion. */
wait_event(fp->wait, raw3270_request_final(rq));
}
} while (rc == -EACCES);
chpid_to_chp(chpid)->state = onoff;
}
-/* On succes return 0 if channel-path is varied offline, 1 if it is varied
+/* On success return 0 if channel-path is varied offline, 1 if it is varied
* online. Return -ENODEV if channel-path is not registered. */
int chp_get_status(struct chp_id chpid)
{
* block of memory, which can not be moved as long as any channel
* is active. Therefore, a maximum number of subchannels needs to
* be defined somewhere. This is a module parameter, defaulting to
- * a resonable value of 1024, or 32 kb of memory.
+ * a reasonable value of 1024, or 32 kb of memory.
* Current kernels don't allow kmalloc with more than 128kb, so the
* maximum is 4096.
*/
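Working those figures back (assuming the 32 kb corresponds directly to the 1024-entry default, which the comment implies but does not state outright): 32768 bytes / 1024 subchannels is 32 bytes per subchannel entry, and the 128 kb kmalloc ceiling divided by the same 32 bytes gives 131072 / 32 = 4096, matching the stated maximum.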
#define ENVCTRL_CPUTEMP_MON 1 /* cpu temperature monitor */
#define ENVCTRL_CPUVOLTAGE_MON 2 /* voltage monitor */
#define ENVCTRL_FANSTAT_MON 3 /* fan status monitor */
-#define ENVCTRL_ETHERTEMP_MON 4 /* ethernet temperarture */
+#define ENVCTRL_ETHERTEMP_MON 4 /* ethernet temperature */
/* monitor */
#define ENVCTRL_VOLTAGESTAT_MON 5 /* voltage status monitor */
#define ENVCTRL_MTHRBDTEMP_MON 6 /* motherboard temperature */
-#define ENVCTRL_SCSITEMP_MON 7 /* scsi temperarture */
+#define ENVCTRL_SCSITEMP_MON 7 /* scsi temperature */
#define ENVCTRL_GLOBALADDR_MON 8 /* global address */
/* Child device type.
unsigned long flags;
int handled = 0;
- /* Use the host lock to serialise acess to the 53c700
+ /* Use the host lock to serialise access to the 53c700
* hardware. Note: In future, we may need to take the queue
* lock to enter the done routines. When that happens, we
* need to ensure that for this driver, the host lock and the
/*
* The adapter interface specs all queues to be located in the same
- * physically contigous block. The host structure that defines the
+ * physically contiguous block. The host structure that defines the
* commuication queues will assume they are each a separate physically
- * contigous memory region that will support them all being one big
- * contigous block.
+ * contiguous memory region that will support them all being one big
+ * contiguous block.
* There is a command and response queue for each level and direction of
* commuication. These regions are accessed by both the host and adapter.
*/
spin_lock_init(&dev->fib_lock);
/*
- * Allocate the physically contigous space for the commuication
+ * Allocate the physically contiguous space for the communication
* queue headers.
*/
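The two hunks above describe one physically contiguous allocation being treated as several per-queue regions. A rough sketch of that layout, with illustrative names and sizes (this is not the aacraid driver's actual allocation code):

#include <linux/dma-mapping.h>

#define NUM_QUEUES	8
#define QUEUE_HDR_SIZE	64

/* One coherent, physically contiguous block is allocated, then carved
 * into per-queue headers at fixed offsets: each header is usable as a
 * region of its own, yet the whole set stays in a single block. */
static int example_alloc_queues(struct device *dev, void *queue_hdr[NUM_QUEUES])
{
	dma_addr_t phys;
	void *base;
	int i;

	base = dma_alloc_coherent(dev, NUM_QUEUES * QUEUE_HDR_SIZE,
				  &phys, GFP_KERNEL);
	if (!base)
		return -ENOMEM;

	for (i = 0; i < NUM_QUEUES; i++)
		queue_hdr[i] = base + i * QUEUE_HDR_SIZE;

	return 0;
}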
scbdma_tohost_done:
test CCSCBCTL, CCARREN jz fill_qoutfifo_dmadone;
/*
- * An SCB has been succesfully uploaded to the host.
+ * An SCB has been successfully uploaded to the host.
* If the SCB was uploaded for some reason other than
* bad SCSI status (currently only for underruns), we
* queue the SCB for normal completion. Otherwise, we
/*
* Although the driver does not care about the
* 'Selection in Progress' status bit, the busy
- * LED does. SELINGO is only cleared by a sucessfull
+ * LED does. SELINGO is only cleared by a successful
* selection, so we must manually clear it to insure
* the LED turns off just incase no future successful
* selections occur (e.g. no devices on the bus).
/*
* Although the driver does not care about the
* 'Selection in Progress' status bit, the busy
- * LED does. SELINGO is only cleared by a sucessfull
+ * LED does. SELINGO is only cleared by a successful
* selection, so we must manually clear it to insure
* the LED turns off just incase no future successful
* selections occur (e.g. no devices on the bus).
* Port operational type (in sync with SNIA port type).
*/
enum bfa_pport_type {
- BFA_PPORT_TYPE_UNKNOWN = 1, /* port type is unkown */
+ BFA_PPORT_TYPE_UNKNOWN = 1, /* port type is unknown */
BFA_PPORT_TYPE_TRUNKED = 2, /* Trunked mode */
BFA_PPORT_TYPE_NPORT = 5, /* P2P with switched fabric */
BFA_PPORT_TYPE_NLPORT = 6, /* public loop */
* Temperature sensor status values
*/
enum bfa_tsensor_status {
- BFA_TSENSOR_STATUS_UNKNOWN = 1, /* unkown status */
+ BFA_TSENSOR_STATUS_UNKNOWN = 1, /* unknown status */
BFA_TSENSOR_STATUS_FAULTY = 2, /* sensor is faulty */
BFA_TSENSOR_STATUS_BELOW_MIN = 3, /* temperature below mininum */
BFA_TSENSOR_STATUS_NOMINAL = 4, /* normal temperature */
atomic_read(&hba->resetting) == 0, 60 * HZ);
if (atomic_read(&hba->resetting)) {
- /* IOP is in unkown state, abort reset */
+ /* IOP is in unknown state, abort reset */
printk(KERN_ERR "scsi%d: reset failed\n", hba->host->host_no);
return -1;
}