D: The Linux Support Team Erlangen
N: Andreas Koensgen
-E: ajk@iehk.rwth-aachen.de
+E: ajk@comnets.uni-bremen.de
D: 6pack driver for AX.25
N: Harald Koerfgen
D: Soundblaster driver fixes, ISAPnP quirk
S: California, USA
+N: Jonathan Layes
+D: ARPD support
+
N: Tom Lees
E: tom@lpsg.demon.co.uk
W: http://www.lpsg.demon.co.uk/
S: 2612 XV Delft
S: The Netherlands
+N: Thomas Woller
+D: CS461x Cirrus Logic sound driver
+
N: David Woodhouse
E: dwmw2@infradead.org
D: JFFS2 file system, Memory Technology Device subsystem,
What:		/sys/block/<disk>/queue/physical_block_size
Date:		May 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
- This is the smallest unit the storage device can write
- without resorting to read-modify-write operation. It is
- usually the same as the logical block size but may be
- bigger. One example is SATA drives with 4KB sectors
- that expose a 512-byte logical block size to the
- operating system.
+ This is the smallest unit a physical storage device can
+ write atomically. It is usually the same as the logical
+ block size but may be bigger. One example is SATA
+ drives with 4KB sectors that expose a 512-byte logical
+ block size to the operating system. For stacked block
+ devices the physical_block_size variable contains the
+ maximum physical_block_size of the component devices.
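
(Illustrative sketch, not part of the ABI text: a user-space program
can honor physical_block_size when sizing buffers for O_DIRECT I/O.
The disk name "sda" and the 512-byte fallback are assumptions.)

#include <stdio.h>
#include <stdlib.h>

/* Allocate a buffer aligned to the disk's physical block size. */
static void *alloc_aligned(size_t len)
{
	FILE *f = fopen("/sys/block/sda/queue/physical_block_size", "r");
	unsigned long pbs = 512;	/* conservative fallback */
	void *buf = NULL;

	if (f) {
		if (fscanf(f, "%lu", &pbs) != 1)
			pbs = 512;
		fclose(f);
	}
	if (posix_memalign(&buf, pbs, len))
		return NULL;
	return buf;
}

int main(void)
{
	void *buf = alloc_aligned(1 << 20);	/* 1 MiB, for example */

	free(buf);
	return buf ? 0 : 1;
}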
What: /sys/block/<disk>/queue/minimum_io_size
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
- Storage devices may report a preferred minimum I/O size,
- which is the smallest request the device can perform
- without incurring a read-modify-write penalty. For disk
- drives this is often the physical block size. For RAID
- arrays it is often the stripe chunk size.
+ Storage devices may report a granularity or preferred
+ minimum I/O size which is the smallest request the
+ device can perform without incurring a performance
+ penalty. For disk drives this is often the physical
+ block size. For RAID arrays it is often the stripe
+ chunk size. A properly aligned multiple of
+ minimum_io_size is the preferred request size for
+ workloads where a high number of I/O operations is
+ desired.
What: /sys/block/<disk>/queue/optimal_io_size
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report an optimal I/O size, which is
- the device's preferred unit of receiving I/O. This is
- rarely reported for disk drives. For RAID devices it is
- usually the stripe width or the internal block size.
+ the device's preferred unit for sustained I/O. This is
+ rarely reported for disk drives. For RAID arrays it is
+ usually the stripe width or the internal track size. A
+ properly aligned multiple of optimal_io_size is the
+ preferred request size for workloads where sustained
+ throughput is desired. If no optimal I/O size is
+ reported this file contains 0.
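
(Illustrative sketch, not part of the ABI text: choosing a request
size from these two hints. The disk name "sda" and the 4 KiB fallback
are assumptions.)

#include <stdio.h>

/* Read one queue limit; returns 0 if the attribute is unreadable. */
static unsigned long read_hint(const char *attr)
{
	char path[128];
	unsigned long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/sda/queue/%s", attr);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%lu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	unsigned long min_io = read_hint("minimum_io_size");
	unsigned long opt_io = read_hint("optimal_io_size");
	/* optimal_io_size reads 0 when the device reports none */
	unsigned long req = opt_io ? opt_io : (min_io ? min_io : 4096);

	printf("preferred request size: %lu bytes\n", req);
	return 0;
}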
</para>
<programlisting>
-__u32 ipaddress;
-printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
+__be32 ipaddress;
+printk(KERN_INFO "my ip: %pI4\n", &ipaddress);
</programlisting>
<para>
obj = kmem_cache_alloc(...);
lock_chain(); // typically a spin_lock()
obj->key = key;
-atomic_inc(&obj->refcnt);
/*
* we need to make sure obj->key is updated before obj->next
+ * or obj->refcnt
*/
smp_wmb();
+atomic_set(&obj->refcnt, 1);
hlist_add_head_rcu(&obj->obj_node, list);
unlock_chain(); // typically a spin_unlock()
obj = kmem_cache_alloc(cachep);
lock_chain(); // typically a spin_lock()
obj->key = key;
+/*
+ * changes to obj->key must be visible before the change to obj->refcnt
+ */
+smp_wmb();
atomic_set(&obj->refcnt, 1);
/*
 * insert obj in RCU way (readers might be traversing chain)
 */
hlist_add_head_rcu(&obj->obj_node, list);
unlock_chain(); // typically a spin_unlock()
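
(For contrast, a reader-side sketch; an illustration, not text from
this document. search() is a hypothetical lockless lookup that walks
the chain under rcu_read_lock(); atomic_inc_not_zero() is required
because the object may already be on its way to being freed.)

rcu_read_lock();
obj = search(key);	/* hypothetical lockless chain walk */
if (obj && !atomic_inc_not_zero(&obj->refcnt))
	obj = NULL;	/* already being freed; treat as no match */
rcu_read_unlock();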
For SA11xx and Xscale, this is used to
set up a minicache mapping.
+ffff4000 ffffffff cache aliasing on ARMv6 and later CPUs.
+
ffff1000 ffff3fff Reserved.
Platforms must not use this address range.
/*
* cn_test.c
*
- * 2004-2005 Copyright (c) Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ * 2004+ Copyright (c) Evgeniy Polyakov <zbr@ioremap.net>
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
module_exit(cn_test_fini);
MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Evgeniy Polyakov <johnpol@2ka.mipt.ru>");
+MODULE_AUTHOR("Evgeniy Polyakov <zbr@ioremap.net>");
MODULE_DESCRIPTION("Connector's test module");
/*
* ucon.c
*
- * Copyright (c) 2004+ Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ * Copyright (c) 2004+ Evgeniy Polyakov <zbr@ioremap.net>
*
*
* This program is free software; you can redistribute it and/or modify
There are user and developer mailing lists available through the v9fs project
on sourceforge (http://sourceforge.net/projects/v9fs).
+A stand-alone version of the module (which should build for any 2.6 kernel)
+is available on GitHub (http://github.com/ericvh/9p-sac/tree/master).
+
News and other information is maintained on SWiK (http://swik.net/v9fs).
Bug reports may be issued through the kernel.org bugzilla
(*) Security (currently only AFS kaserver and KerberosIV tickets).
- (*) File reading.
+ (*) File reading and writing.
(*) Automounting.
-It does not yet support the following AFS features:
-
- (*) Write support.
- (*) Local caching.
+ (*) Local caching (via fscache).
+
+It does not yet support the following AFS features:
(*) pioctl() system call.
the masks in the following files:
/sys/module/af_rxrpc/parameters/debug
- /sys/module/afs/parameters/debug
+ /sys/module/kafs/parameters/debug
=====
When inserting the driver modules the root cell must be specified along with a
list of volume location server IP addresses:
- insmod af_rxrpc.o
- insmod rxkad.o
- insmod kafs.o rootcell=cambridge.redhat.com:172.16.18.73:172.16.18.91
+ modprobe af_rxrpc
+ modprobe rxkad
+ modprobe kafs rootcell=cambridge.redhat.com:172.16.18.73:172.16.18.91
The first module is the AF_RXRPC network protocol driver. This provides the
RxRPC remote operation protocol and may also be accessed from userspace. See:
Once the module has been loaded, more modules can be added by the following
procedure:
- echo add grand.central.org 18.7.14.88:128.2.191.224 >/proc/fs/afs/cells
+ echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 >/proc/fs/afs/cells
Where the parameters to the "add" command are the name of a cell and a list of
volume location servers within that cell, with the latter separated by colons.