<title>Three Main Types of Kernel Locks: Spinlocks, Mutexes and Semaphores</title>
<para>
- There are two main types of kernel locks. The fundamental type
+ There are three main types of kernel locks. The fundamental type
is the spinlock
(<filename class="headerfile">include/asm/spinlock.h</filename>),
which is a very simple single-holder lock: if you can't get the
spinlock, you keep trying (spinning) until you can. Spinlocks are
very small and fast, and can be used anywhere.
</para>
<para>
- The second type is a semaphore
+ The second type is a mutex
+ (<filename class="headerfile">include/linux/mutex.h</filename>): it
+ is like a spinlock, but you may block holding a mutex.
+ If you can't lock a mutex, your task will suspend itself, and be woken
+ up when the mutex is released. This means the CPU can do something
+ else while you are waiting. There are many cases when you simply
+ can't sleep (see <xref linkend="sleeping-things"/>), and so have to
+ use a spinlock instead.
+ </para>
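+ <para>
+ As a minimal illustration (the lock and variable names below are
+ invented for this example, not taken from the kernel sources), a
+ mutex protecting a simple counter looks like this:
+ </para>
+ <programlisting>
+#include &lt;linux/mutex.h&gt;
+
+static DEFINE_MUTEX(counter_mutex);
+static int counter;
+
+void bump_counter(void)
+{
+        mutex_lock(&amp;counter_mutex);
+        counter++;
+        mutex_unlock(&amp;counter_mutex);
+}
+ </programlisting>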
+ <para>
+ The third type is a semaphore
(<filename class="headerfile">include/asm/semaphore.h</filename>): it
can have more than one holder at any time (the number decided at
initialization time), although it is most commonly used as a
- single-holder lock (a mutex). If you can't get a semaphore,
- your task will put itself on the queue, and be woken up when the
- semaphore is released. This means the CPU will do something
- else while you are waiting, but there are many cases when you
- simply can't sleep (see <xref linkend="sleeping-things"/>), and so
- have to use a spinlock instead.
+ single-holder lock (a mutex). If you can't get a semaphore, your
+ task will be suspended and later on woken up - just like for mutexes.
</para>
<para>
Neither type of lock is recursive: see
--- /dev/null
+Generic Mutex Subsystem
+
+started by Ingo Molnar <mingo@redhat.com>
+
+ "Why on earth do we need a new mutex subsystem, and what's wrong
+ with semaphores?"
+
+firstly, there's nothing wrong with semaphores. But if the simpler
+mutex semantics are sufficient for your code, then there are a couple
+of advantages of mutexes:
+
+ - 'struct mutex' is smaller on most architectures: e.g. on x86,
+ 'struct semaphore' is 20 bytes, 'struct mutex' is 16 bytes.
+ A smaller structure size means less RAM footprint, and better
+ CPU-cache utilization.
+
+ - tighter code. On x86 I get the following .text sizes when
+ switching all mutex-alike semaphores in the kernel to the mutex
+ subsystem:
+
+ text data bss dec hex filename
+ 3280380 868188 396860 4545428 455b94 vmlinux-semaphore
+ 3255329 865296 396732 4517357 44eded vmlinux-mutex
+
+ that's 25051 bytes of code saved, or a 0.76% win - off the hottest
+ codepaths of the kernel. (The .data savings are 2892 bytes, or 0.33%)
+ Smaller code means better icache footprint, which is one of the
+ major optimization goals in the Linux kernel currently.
+
+ - the mutex subsystem is slightly faster and has better scalability for
+ contended workloads. On an 8-way x86 system, running a mutex-based
+ kernel and testing creat+unlink+close (of separate, per-task files)
+ in /tmp with 16 parallel tasks, the average number of ops/sec is:
+
+ Semaphores: Mutexes:
+
+ $ ./test-mutex V 16 10 $ ./test-mutex V 16 10
+ 8 CPUs, running 16 tasks. 8 CPUs, running 16 tasks.
+ checking VFS performance. checking VFS performance.
+ avg loops/sec: 34713 avg loops/sec: 84153
+ CPU utilization: 63% CPU utilization: 22%
+
+ i.e. in this workload, the mutex based kernel was 2.4 times faster
+ than the semaphore based kernel, _and_ it also had 2.8 times less CPU
+ utilization. (In terms of 'ops per CPU cycle', the semaphore kernel
+ performed 551 ops/sec per 1% of CPU time used, while the mutex kernel
+ performed 3825 ops/sec per 1% of CPU time used - it was 6.9 times
+ more efficient.)
+
+ the scalability difference is visible even on a 2-way P4 HT box:
+
+ Semaphores: Mutexes:
+
+ $ ./test-mutex V 16 10 $ ./test-mutex V 16 10
+ 4 CPUs, running 16 tasks. 8 CPUs, running 16 tasks.
+ checking VFS performance. checking VFS performance.
+ avg loops/sec: 127659 avg loops/sec: 181082
+ CPU utilization: 100% CPU utilization: 34%
+
+ (the straight performance advantage of mutexes is 41%, the per-cycle
+ efficiency of mutexes is 4.1 times better.)
+
+ - there are no fastpath tradeoffs, the mutex fastpath is just as tight
+ as the semaphore fastpath. On x86, the locking fastpath is 2
+ instructions:
+
+ c0377ccb <mutex_lock>:
+ c0377ccb: f0 ff 08 lock decl (%eax)
+ c0377cce: 78 0e js c0377cde <.text.lock.mutex>
+ c0377cd0: c3 ret
+
+ the unlocking fastpath is equally tight:
+
+ c0377cd1 <mutex_unlock>:
+ c0377cd1: f0 ff 00 lock incl (%eax)
+ c0377cd4: 7e 0f jle c0377ce5 <.text.lock.mutex+0x7>
+ c0377cd6: c3 ret
+
+ - 'struct mutex' semantics are well-defined and are enforced if
+ CONFIG_DEBUG_MUTEXES is turned on. Semaphores on the other hand have
+ virtually no debugging code or instrumentation. The mutex subsystem
+ checks and enforces the following rules:
+
+ * - only one task can hold the mutex at a time
+ * - only the owner can unlock the mutex
+ * - multiple unlocks are not permitted
+ * - recursive locking is not permitted
+ * - a mutex object must be initialized via the API
+ * - a mutex object must not be initialized via memset or copying
+ * - task may not exit with mutex held
+ * - memory areas where held locks reside must not be freed
+ * - held mutexes must not be reinitialized
+ * - mutexes may not be used in irq contexts
+
+ furthermore, there are also convenience features in the debugging
+ code:
+
+ * - uses symbolic names of mutexes, whenever they are printed in debug output
+ * - point-of-acquire tracking, symbolic lookup of function names
+ * - list of all locks held in the system, printout of them
+ * - owner tracking
+ * - detects self-recursing locks and prints out all relevant info
+ * - detects multi-task circular deadlocks and prints out all affected
+ * locks and tasks (and only those tasks)
+
+Disadvantages
+-------------
+
+The stricter mutex API means you cannot use mutexes the same way you
+can use semaphores: e.g. they cannot be used from an interrupt context,
+nor can they be unlocked from a different context than the one that
+acquired them. [ I'm not aware of any other (e.g. performance)
+disadvantages from
+using mutexes at the moment, please let me know if you find any. ]
+
+Implementation of mutexes
+-------------------------
+
+'struct mutex' is the new mutex type, defined in include/linux/mutex.h
+and implemented in kernel/mutex.c. It is a counter-based mutex with a
+spinlock and a wait-list. The counter has 3 states: 1 for "unlocked",
+0 for "locked" and negative numbers (usually -1) for "locked, potential
+waiters queued".
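+
+as a rough sketch of what this counter protocol boils down to on the
+fastpath (an illustration only - the real fastpaths are per-architecture
+inlines/assembly, and the slowpath helpers below merely stand in for
+the ones in kernel/mutex.c; ->count is the counter described above):
+
+	void mutex_lock(struct mutex *lock)
+	{
+		/* 1 -> 0: we took the lock with no contention */
+		if (atomic_dec_return(&lock->count) < 0)
+			/* went negative: contended, queue up and sleep */
+			__mutex_lock_slowpath(lock);
+	}
+
+	void mutex_unlock(struct mutex *lock)
+	{
+		/* 0 -> 1: nobody was waiting */
+		if (atomic_inc_return(&lock->count) <= 0)
+			/* still <= 0: waiters are queued, wake one up */
+			__mutex_unlock_slowpath(lock);
+	}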
+
+the APIs of 'struct mutex' have been streamlined:
+
+ DEFINE_MUTEX(name);
+
+ mutex_init(mutex);
+
+ void mutex_lock(struct mutex *lock);
+ int mutex_lock_interruptible(struct mutex *lock);
+ int mutex_trylock(struct mutex *lock);
+ void mutex_unlock(struct mutex *lock);
+ int mutex_is_locked(struct mutex *lock);
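+
+a quick usage sketch of the API above (struct foo_device, its foo_mutex
+member and the foo_*() functions are made-up names, for illustration
+only):
+
+	#include <linux/mutex.h>
+
+	struct foo_device {
+		struct mutex	foo_mutex;
+		int		value;
+	};
+
+	void foo_init(struct foo_device *foo)
+	{
+		mutex_init(&foo->foo_mutex);	/* must initialize via the API */
+		foo->value = 0;
+	}
+
+	int foo_update(struct foo_device *foo, int value)
+	{
+		/* sleep until we own the mutex, or a signal arrives */
+		if (mutex_lock_interruptible(&foo->foo_mutex))
+			return -EINTR;
+		foo->value = value;
+		/* only the owner (this task) may unlock */
+		mutex_unlock(&foo->foo_mutex);
+		return 0;
+	}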
+
{
if (PageHighMem(page))
return;
+ if (!enable)
+ mutex_debug_check_no_locks_freed(page_address(page),
+ page_address(page+numpages));
+
/* the return value is ignored - the calls cannot fail,
* large pages are disabled at boot time.
*/
static void spufs_prune_dir(struct dentry *dir)
{
struct dentry *dentry, *tmp;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_child) {
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
}
}
shrink_dcache_parent(dir);
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
}
static int spufs_rmdir(struct inode *root, struct dentry *dir_dentry)
struct spu_context *ctx;
/* remove all entries */
- down(&root->i_sem);
+ mutex_lock(&root->i_mutex);
spufs_prune_dir(dir_dentry);
- up(&root->i_sem);
+ mutex_unlock(&root->i_mutex);
/* We have to give up the mm_struct */
ctx = SPUFS_I(dir_dentry->d_inode)->i_ctx;
spu_forget(ctx);
- /* XXX Do we need to hold i_sem here ? */
+ /* XXX Do we need to hold i_mutex here ? */
return simple_rmdir(root, dir_dentry);
}
out_dput:
dput(dentry);
out_dir:
- up(&nd->dentry->d_inode->i_sem);
+ mutex_unlock(&nd->dentry->d_inode->i_mutex);
out:
return ret;
}
unsigned offset, bv_offs;
int len, ret;
- down(&mapping->host->i_sem);
+ mutex_lock(&mapping->host->i_mutex);
index = pos >> PAGE_CACHE_SHIFT;
offset = pos & ((pgoff_t)PAGE_CACHE_SIZE - 1);
bv_offs = bvec->bv_offset;
}
ret = 0;
out:
- up(&mapping->host->i_sem);
+ mutex_unlock(&mapping->host->i_mutex);
return ret;
unlock:
unlock_page(page);
lo->lo_pending++;
loop_add_bio(lo, old_bio);
spin_unlock_irq(&lo->lo_lock);
- up(&lo->lo_bh_mutex);
+ complete(&lo->lo_bh_done);
return 0;
out:
if (lo->lo_pending == 0)
- up(&lo->lo_bh_mutex);
+ complete(&lo->lo_bh_done);
spin_unlock_irq(&lo->lo_lock);
bio_io_error(old_bio, old_bio->bi_size);
return 0;
lo->lo_pending = 1;
/*
- * up sem, we are running
+ * complete it, we are running
*/
- up(&lo->lo_sem);
+ complete(&lo->lo_done);
for (;;) {
int pending;
- /*
- * interruptible just to not contribute to load avg
- */
- if (down_interruptible(&lo->lo_bh_mutex))
+ if (wait_for_completion_interruptible(&lo->lo_bh_done))
continue;
spin_lock_irq(&lo->lo_lock);
/*
- * could be upped because of tear-down, not pending work
+ * could be completed because of tear-down, not pending work
*/
if (unlikely(!lo->lo_pending)) {
spin_unlock_irq(&lo->lo_lock);
break;
}
- up(&lo->lo_sem);
+ complete(&lo->lo_done);
return 0;
}
set_blocksize(bdev, lo_blocksize);
kernel_thread(loop_thread, lo, CLONE_KERNEL);
- down(&lo->lo_sem);
+ wait_for_completion(&lo->lo_done);
return 0;
out_putf:
lo->lo_state = Lo_rundown;
lo->lo_pending--;
if (!lo->lo_pending)
- up(&lo->lo_bh_mutex);
+ complete(&lo->lo_bh_done);
spin_unlock_irq(&lo->lo_lock);
- down(&lo->lo_sem);
+ wait_for_completion(&lo->lo_done);
lo->lo_backing_file = NULL;
if (!lo->lo_queue)
goto out_mem4;
init_MUTEX(&lo->lo_ctl_mutex);
- init_MUTEX_LOCKED(&lo->lo_sem);
- init_MUTEX_LOCKED(&lo->lo_bh_mutex);
+ init_completion(&lo->lo_done);
+ init_completion(&lo->lo_bh_done);
lo->lo_number = i;
spin_lock_init(&lo->lo_lock);
disk->major = LOOP_MAJOR;
#include <linux/time.h>
#include <linux/hdreg.h>
#include <linux/dma-mapping.h>
+#include <linux/completion.h>
#include <asm/io.h>
-#include <asm/semaphore.h>
#include <asm/uaccess.h>
#if 0
struct work_struct fsm_task;
- struct semaphore probe_sem;
+ struct completion probe_comp;
};
struct carm_response {
}
case HST_PROBE_FINISHED:
- up(&host->probe_sem);
+ complete(&host->probe_comp);
break;
case HST_ERROR:
host->flags = pci_dac ? FL_DAC : 0;
spin_lock_init(&host->lock);
INIT_WORK(&host->fsm_task, carm_fsm_task, host);
- init_MUTEX_LOCKED(&host->probe_sem);
+ init_completion(&host->probe_comp);
for (i = 0; i < ARRAY_SIZE(host->req); i++)
host->req[i].tag = i;
if (rc)
goto err_out_free_irq;
- DPRINTK("waiting for probe_sem\n");
- down(&host->probe_sem);
+ DPRINTK("waiting for probe_comp\n");
+ wait_for_completion(&host->probe_comp);
printk(KERN_INFO "%s: pci %s, ports %d, io %lx, irq %u, major %d\n",
host->name, pci_name(pdev), (int) CARM_MAX_PORTS,
{
loff_t ret;
- down(&file->f_dentry->d_inode->i_sem);
+ mutex_lock(&file->f_dentry->d_inode->i_mutex);
switch (orig) {
case 0:
file->f_pos = offset;
default:
ret = -EINVAL;
}
- up(&file->f_dentry->d_inode->i_sem);
+ mutex_unlock(&file->f_dentry->d_inode->i_mutex);
return ret;
}
/* END SYNC SYSRQ HANDLERS BLOCK */
+#ifdef CONFIG_DEBUG_MUTEXES
+
+static void
+sysrq_handle_showlocks(int key, struct pt_regs *pt_regs, struct tty_struct *tty)
+{
+ mutex_debug_show_all_locks();
+}
+
+static struct sysrq_key_op sysrq_showlocks_op = {
+ .handler = sysrq_handle_showlocks,
+ .help_msg = "show-all-locks(D)",
+ .action_msg = "Show Locks Held",
+};
+
+#endif
/* SHOW SYSRQ HANDLERS BLOCK */
#else
/* c */ NULL,
#endif
+#ifdef CONFIG_DEBUG_MUTEXES
+/* d */ &sysrq_showlocks_op,
+#else
/* d */ NULL,
+#endif
/* e */ &sysrq_term_op,
/* f */ &sysrq_moom_op,
/* g */ NULL,
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/timer.h>
+#include <linux/completion.h>
#include <linux/jiffies.h>
#include <asm/io.h>
#include <asm/uaccess.h>
/* some device data */
static struct {
- struct semaphore stop;
+ struct completion stop;
volatile int running;
struct timer_list timer;
volatile int queue;
}
else {
/* ticks doesn't matter anyway */
- up(&cpu5wdt_device.stop);
+ complete(&cpu5wdt_device.stop);
}
}
if ( !val )
printk(KERN_INFO PFX "sorry, was my fault\n");
- init_MUTEX_LOCKED(&cpu5wdt_device.stop);
+ init_completion(&cpu5wdt_device.stop);
cpu5wdt_device.queue = 0;
clear_bit(0, &cpu5wdt_device.inuse);
{
if ( cpu5wdt_device.queue ) {
cpu5wdt_device.queue = 0;
- down(&cpu5wdt_device.stop);
+ wait_for_completion(&cpu5wdt_device.stop);
}
misc_deregister(&cpu5wdt_misc);
{
ide_hwif_t *hwif = container_of(dev, ide_hwif_t, gendev);
- up(&hwif->gendev_rel_sem);
+ complete(&hwif->gendev_rel_comp);
}
static void hwif_register (ide_hwif_t *hwif)
drive->queue = NULL;
spin_unlock_irq(&ide_lock);
- up(&drive->gendev_rel_sem);
+ complete(&drive->gendev_rel_comp);
}
/*
hwif->mwdma_mask = 0x80; /* disable all mwdma */
hwif->swdma_mask = 0x80; /* disable all swdma */
- sema_init(&hwif->gendev_rel_sem, 0);
+ init_completion(&hwif->gendev_rel_comp);
default_hwif_iops(hwif);
default_hwif_transport(hwif);
drive->is_flash = 0;
drive->vdma = 0;
INIT_LIST_HEAD(&drive->list);
- sema_init(&drive->gendev_rel_sem, 0);
+ init_completion(&drive->gendev_rel_comp);
}
}
}
spin_unlock_irq(&ide_lock);
device_unregister(&drive->gendev);
- down(&drive->gendev_rel_sem);
+ wait_for_completion(&drive->gendev_rel_comp);
spin_lock_irq(&ide_lock);
}
hwif->present = 0;
/* More messed up locking ... */
spin_unlock_irq(&ide_lock);
device_unregister(&hwif->gendev);
- down(&hwif->gendev_rel_sem);
+ wait_for_completion(&hwif->gendev_rel_comp);
/*
* Remove us from the kernel's knowledge
{
char s[10];
struct dentry *root = capifs_root;
- down(&root->d_inode->i_sem);
+ mutex_lock(&root->d_inode->i_mutex);
return lookup_one_len(s, root, sprintf(s, "%d", num));
}
dentry = get_node(number);
if (!IS_ERR(dentry) && !dentry->d_inode)
d_instantiate(dentry, inode);
- up(&capifs_root->d_inode->i_sem);
+ mutex_unlock(&capifs_root->d_inode->i_mutex);
}
void capifs_free_ncci(unsigned int number)
}
dput(dentry);
}
- up(&capifs_root->d_inode->i_sem);
+ mutex_unlock(&capifs_root->d_inode->i_mutex);
}
static int __init capifs_init(void)
{
set_capacity(md->disk, size);
- down(&md->suspended_bdev->bd_inode->i_sem);
+ mutex_lock(&md->suspended_bdev->bd_inode->i_mutex);
i_size_write(md->suspended_bdev->bd_inode, (loff_t)size << SECTOR_SHIFT);
- up(&md->suspended_bdev->bd_inode->i_sem);
+ mutex_unlock(&md->suspended_bdev->bd_inode->i_mutex);
}
static int __bind(struct mapped_device *md, struct dm_table *t)
bdev = bdget_disk(mddev->gendisk, 0);
if (bdev) {
- down(&bdev->bd_inode->i_sem);
+ mutex_lock(&bdev->bd_inode->i_mutex);
i_size_write(bdev->bd_inode, mddev->array_size << 10);
- up(&bdev->bd_inode->i_sem);
+ mutex_unlock(&bdev->bd_inode->i_mutex);
bdput(bdev);
}
}
bdev = bdget_disk(mddev->gendisk, 0);
if (bdev) {
- down(&bdev->bd_inode->i_sem);
+ mutex_lock(&bdev->bd_inode->i_mutex);
i_size_write(bdev->bd_inode, mddev->array_size << 10);
- up(&bdev->bd_inode->i_sem);
+ mutex_unlock(&bdev->bd_inode->i_mutex);
bdput(bdev);
}
}
loff_t new = -1;
struct inode *inode = file->f_dentry->d_inode;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
switch (whence) {
case 0:
new = off;
new = -EINVAL;
else
file->f_pos = new;
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return new;
}
bus->d_inode->i_gid = busgid;
bus->d_inode->i_mode = S_IFDIR | busmode;
- down(&bus->d_inode->i_sem);
+ mutex_lock(&bus->d_inode->i_mutex);
list_for_each_entry(dev, &bus->d_subdirs, d_u.d_child)
if (dev->d_inode)
update_dev(dev);
- up(&bus->d_inode->i_sem);
+ mutex_unlock(&bus->d_inode->i_mutex);
}
static void update_sb(struct super_block *sb)
if (!root)
return;
- down(&root->d_inode->i_sem);
+ mutex_lock(&root->d_inode->i_mutex);
list_for_each_entry(bus, &root->d_subdirs, d_u.d_child) {
if (bus->d_inode) {
}
}
- up(&root->d_inode->i_sem);
+ mutex_unlock(&root->d_inode->i_mutex);
}
static int remount(struct super_block *sb, int *flags, char *data)
static int usbfs_unlink (struct inode *dir, struct dentry *dentry)
{
struct inode *inode = dentry->d_inode;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
dentry->d_inode->i_nlink--;
dput(dentry);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
d_delete(dentry);
return 0;
}
int error = -ENOTEMPTY;
struct inode * inode = dentry->d_inode;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
dentry_unhash(dentry);
if (usbfs_empty(dentry)) {
dentry->d_inode->i_nlink -= 2;
dir->i_nlink--;
error = 0;
}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
if (!error)
d_delete(dentry);
dput(dentry);
{
loff_t retval = -EINVAL;
- down(&file->f_dentry->d_inode->i_sem);
+ mutex_lock(&file->f_dentry->d_inode->i_mutex);
switch(orig) {
case 0:
if (offset > 0) {
default:
break;
}
- up(&file->f_dentry->d_inode->i_sem);
+ mutex_unlock(&file->f_dentry->d_inode->i_mutex);
return retval;
}
}
*dentry = NULL;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
*dentry = lookup_one_len(name, parent, strlen(name));
if (!IS_ERR(dentry)) {
if ((mode & S_IFMT) == S_IFDIR)
error = usbfs_create (parent->d_inode, *dentry, mode);
} else
error = PTR_ERR(dentry);
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
return error;
}
if (!parent || !parent->d_inode)
return;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
if (usbfs_positive(dentry)) {
if (dentry->d_inode) {
if (S_ISDIR(dentry->d_inode->i_mode))
dput(dentry);
}
}
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
}
/* --------------------------------------------------------------------- */
return -EINVAL;
inode = filp->f_dentry->d_inode;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
current->flags |= PF_SYNCWRITE;
rc = filemap_fdatawrite(inode->i_mapping);
err = filp->f_op->fsync(filp, filp->f_dentry, 1);
if (!rc)
rc = err;
current->flags &= ~PF_SYNCWRITE;
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
VLDBG(curlun, "fdatasync -> %d\n", rc);
return rc;
}
spin_unlock_irq (&dev->lock);
/* break link to dcache */
- down (&parent->i_sem);
+ mutex_lock (&parent->i_mutex);
d_delete (dentry);
dput (dentry);
- up (&parent->i_sem);
+ mutex_unlock (&parent->i_mutex);
/* fds may still be open */
goto restart;
pr_debug("AFFS: put_inode(ino=%lu, nlink=%u)\n", inode->i_ino, inode->i_nlink);
affs_free_prealloc(inode);
if (atomic_read(&inode->i_count) == 1) {
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
if (inode->i_size != AFFS_I(inode)->mmu_private)
affs_truncate(inode);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
}
}
dentry->d_flags |= DCACHE_AUTOFS_PENDING;
d_add(dentry, NULL);
- up(&dir->i_sem);
+ mutex_unlock(&dir->i_mutex);
autofs_revalidate(dentry, nd);
- down(&dir->i_sem);
+ mutex_lock(&dir->i_mutex);
/*
* If we are still pending, check if we had to handle
d_add(dentry, NULL);
if (dentry->d_op && dentry->d_op->d_revalidate) {
- up(&dir->i_sem);
+ mutex_unlock(&dir->i_mutex);
(dentry->d_op->d_revalidate)(dentry, nd);
- down(&dir->i_sem);
+ mutex_lock(&dir->i_mutex);
}
/*
case 2: set_bit(Enabled, &e->flags);
break;
case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root);
- down(&root->d_inode->i_sem);
+ mutex_lock(&root->d_inode->i_mutex);
kill_node(e);
- up(&root->d_inode->i_sem);
+ mutex_unlock(&root->d_inode->i_mutex);
dput(root);
break;
default: return res;
return PTR_ERR(e);
root = dget(sb->s_root);
- down(&root->d_inode->i_sem);
+ mutex_lock(&root->d_inode->i_mutex);
dentry = lookup_one_len(e->name, root, strlen(e->name));
err = PTR_ERR(dentry);
if (IS_ERR(dentry))
out2:
dput(dentry);
out:
- up(&root->d_inode->i_sem);
+ mutex_unlock(&root->d_inode->i_mutex);
dput(root);
if (err) {
case 1: enabled = 0; break;
case 2: enabled = 1; break;
case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root);
- down(&root->d_inode->i_sem);
+ mutex_lock(&root->d_inode->i_mutex);
while (!list_empty(&entries))
kill_node(list_entry(entries.next, Node, list));
- up(&root->d_inode->i_sem);
+ mutex_unlock(&root->d_inode->i_mutex);
dput(root);
default: return res;
}
loff_t size;
loff_t retval;
- down(&bd_inode->i_sem);
+ mutex_lock(&bd_inode->i_mutex);
size = i_size_read(bd_inode);
switch (origin) {
}
retval = offset;
}
- up(&bd_inode->i_sem);
+ mutex_unlock(&bd_inode->i_mutex);
return retval;
}
* We need to protect against concurrent writers,
* which could cause livelocks in fsync_buffers_list
*/
- down(&mapping->host->i_sem);
+ mutex_lock(&mapping->host->i_mutex);
err = file->f_op->fsync(file, file->f_dentry, datasync);
if (!ret)
ret = err;
- up(&mapping->host->i_sem);
+ mutex_unlock(&mapping->host->i_mutex);
err = filemap_fdatawait(mapping);
if (!ret)
ret = err;
__block_commit_write(inode,page,from,to);
/*
* No need to use i_size_read() here, the i_size
- * cannot change under us because we hold i_sem.
+ * cannot change under us because we hold i_mutex.
*/
if (pos > inode->i_size) {
i_size_write(inode, pos);
DeleteOplockQEntry(oplock_item);
/* can not grab inode sem here since it would
deadlock when oplock received on delete
- since vfs_unlink holds the i_sem across
+ since vfs_unlink holds the i_mutex across
the call */
- /* down(&inode->i_sem);*/
+ /* mutex_lock(&inode->i_mutex);*/
if (S_ISREG(inode->i_mode)) {
rc = filemap_fdatawrite(inode->i_mapping);
if(CIFS_I(inode)->clientCanCacheRead == 0) {
}
} else
rc = 0;
- /* up(&inode->i_sem);*/
+ /* mutex_unlock(&inode->i_mutex);*/
if (rc)
CIFS_I(inode)->write_behind_rc = rc;
cFYI(1,("Oplock flush inode %p rc %d",inode,rc));
}
/* can not grab this sem since kernel filesys locking documentation
- indicates i_sem may be taken by the kernel on lookup and rename
- which could deadlock if we grab the i_sem here as well */
-/* down(&direntry->d_inode->i_sem);*/
+ indicates i_mutex may be taken by the kernel on lookup and rename
+ which could deadlock if we grab the i_mutex here as well */
+/* mutex_lock(&direntry->d_inode->i_mutex);*/
/* need to write out dirty pages here */
if (direntry->d_inode->i_mapping) {
/* do we need to lock inode until after invalidate completes
}
}
}
-/* up(&direntry->d_inode->i_sem); */
+/* mutex_unlock(&direntry->d_inode->i_mutex); */
kfree(full_path);
FreeXid(xid);
coda_vfs_stat.readdir++;
host_inode = host_file->f_dentry->d_inode;
- down(&host_inode->i_sem);
+ mutex_lock(&host_inode->i_mutex);
host_file->f_pos = coda_file->f_pos;
if (!host_file->f_op->readdir) {
}
out:
coda_file->f_pos = host_file->f_pos;
- up(&host_inode->i_sem);
+ mutex_unlock(&host_inode->i_mutex);
return ret;
}
return -EINVAL;
host_inode = host_file->f_dentry->d_inode;
- down(&coda_inode->i_sem);
+ mutex_lock(&coda_inode->i_mutex);
ret = host_file->f_op->write(host_file, buf, count, ppos);
coda_inode->i_size = host_inode->i_size;
coda_inode->i_blocks = (coda_inode->i_size + 511) >> 9;
coda_inode->i_mtime = coda_inode->i_ctime = CURRENT_TIME_SEC;
- up(&coda_inode->i_sem);
+ mutex_unlock(&coda_inode->i_mutex);
return ret;
}
if (host_file->f_op && host_file->f_op->fsync) {
host_dentry = host_file->f_dentry;
host_inode = host_dentry->d_inode;
- down(&host_inode->i_sem);
+ mutex_lock(&host_inode->i_mutex);
err = host_file->f_op->fsync(host_file, host_dentry, datasync);
- up(&host_inode->i_sem);
+ mutex_unlock(&host_inode->i_mutex);
}
if ( !err && !datasync ) {
/*
* Only subdirectories count here. Files (CONFIGFS_NOT_PINNED) are
- * attributes and are removed by rmdir(). We recurse, taking i_sem
+ * attributes and are removed by rmdir(). We recurse, taking i_mutex
* on all children that are candidates for default detach. If the
* result is clean, then configfs_detach_group() will handle dropping
- * i_sem. If there is an error, the caller will clean up the i_sem
+ * i_mutex. If there is an error, the caller will clean up the i_mutex
* holders via configfs_detach_rollback().
*/
static int configfs_detach_prep(struct dentry *dentry)
if (sd->s_type & CONFIGFS_NOT_PINNED)
continue;
if (sd->s_type & CONFIGFS_USET_DEFAULT) {
- down(&sd->s_dentry->d_inode->i_sem);
- /* Mark that we've taken i_sem */
+ mutex_lock(&sd->s_dentry->d_inode->i_mutex);
+ /* Mark that we've taken i_mutex */
sd->s_type |= CONFIGFS_USET_DROPPING;
ret = configfs_detach_prep(sd->s_dentry);
}
/*
- * Walk the tree, dropping i_sem wherever CONFIGFS_USET_DROPPING is
+ * Walk the tree, dropping i_mutex wherever CONFIGFS_USET_DROPPING is
* set.
*/
static void configfs_detach_rollback(struct dentry *dentry)
if (sd->s_type & CONFIGFS_USET_DROPPING) {
sd->s_type &= ~CONFIGFS_USET_DROPPING;
- up(&sd->s_dentry->d_inode->i_sem);
+ mutex_unlock(&sd->s_dentry->d_inode->i_mutex);
}
}
}
/*
* From rmdir/unregister, a configfs_detach_prep() pass
- * has taken our i_sem for us. Drop it.
+ * has taken our i_mutex for us. Drop it.
* From mkdir/register cleanup, there is no sem held.
*/
if (sd->s_type & CONFIGFS_USET_DROPPING)
- up(&child->d_inode->i_sem);
+ mutex_unlock(&child->d_inode->i_mutex);
d_delete(child);
dput(child);
/* FYI, we're faking mkdir here
* I'm not sure we need this semaphore, as we're called
* from our parent's mkdir. That holds our parent's
- * i_sem, so afaik lookup cannot continue through our
+ * i_mutex, so afaik lookup cannot continue through our
* parent to find us, let alone mess with our tree.
- * That said, taking our i_sem is closer to mkdir
+ * That said, taking our i_mutex is closer to mkdir
* emulation, and shouldn't hurt. */
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
for (i = 0; group->default_groups[i]; i++) {
new_group = group->default_groups[i];
break;
}
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
}
if (ret)
down_write(&configfs_rename_sem);
parent = item->parent->dentry;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
new_dentry = lookup_one_len(new_name, parent, strlen(new_name));
if (!IS_ERR(new_dentry)) {
error = -EEXIST;
dput(new_dentry);
}
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
up_write(&configfs_rename_sem);
return error;
struct dentry * dentry = file->f_dentry;
struct configfs_dirent * parent_sd = dentry->d_fsdata;
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
file->private_data = configfs_new_dirent(parent_sd, NULL);
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
return file->private_data ? 0 : -ENOMEM;
struct dentry * dentry = file->f_dentry;
struct configfs_dirent * cursor = file->private_data;
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
list_del_init(&cursor->s_sibling);
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
release_configfs_dirent(cursor);
{
struct dentry * dentry = file->f_dentry;
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
switch (origin) {
case 1:
offset += file->f_pos;
if (offset >= 0)
break;
default:
- up(&file->f_dentry->d_inode->i_sem);
+ mutex_unlock(&file->f_dentry->d_inode->i_mutex);
return -EINVAL;
}
if (offset != file->f_pos) {
list_add_tail(&cursor->s_sibling, p);
}
}
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
return offset;
}
sd = configfs_sb->s_root->d_fsdata;
link_group(to_config_group(sd->s_element), group);
- down(&configfs_sb->s_root->d_inode->i_sem);
+ mutex_lock(&configfs_sb->s_root->d_inode->i_mutex);
name.name = group->cg_item.ci_name;
name.len = strlen(name.name);
else
d_delete(dentry);
- up(&configfs_sb->s_root->d_inode->i_sem);
+ mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex);
if (dentry) {
dput(dentry);
return;
}
- down(&configfs_sb->s_root->d_inode->i_sem);
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&configfs_sb->s_root->d_inode->i_mutex);
+ mutex_lock(&dentry->d_inode->i_mutex);
if (configfs_detach_prep(dentry)) {
printk(KERN_ERR "configfs: Tried to unregister non-empty subsystem!\n");
}
configfs_detach_group(&group->cg_item);
dentry->d_inode->i_flags |= S_DEAD;
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
d_delete(dentry);
- up(&configfs_sb->s_root->d_inode->i_sem);
+ mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex);
dput(dentry);
umode_t mode = (attr->ca_mode & S_IALLUGO) | S_IFREG;
int error = 0;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
error = configfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type);
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
return error;
}
/*
* Unhashes the dentry corresponding to given configfs_dirent
- * Called with parent inode's i_sem held.
+ * Called with parent inode's i_mutex held.
*/
void configfs_drop_dentry(struct configfs_dirent * sd, struct dentry * parent)
{
struct configfs_dirent * sd;
struct configfs_dirent * parent_sd = dir->d_fsdata;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
list_for_each_entry(sd, &parent_sd->s_children, s_sibling) {
if (!sd->s_element)
continue;
break;
}
}
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
}
}
*dentry = NULL;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
*dentry = lookup_one_len(name, parent, strlen(name));
if (!IS_ERR(dentry)) {
if ((mode & S_IFMT) == S_IFDIR)
error = debugfs_create(parent->d_inode, *dentry, mode);
} else
error = PTR_ERR(dentry);
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
return error;
}
if (!parent || !parent->d_inode)
return;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
if (debugfs_positive(dentry)) {
if (dentry->d_inode) {
if (S_ISDIR(dentry->d_inode->i_mode))
dput(dentry);
}
}
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
simple_release_fs(&debugfs_mount, &debugfs_mount_count);
}
EXPORT_SYMBOL_GPL(debugfs_remove);
*
* make sure that
* d_instantiate always runs under lock
- * we release i_sem lock before going to sleep
+ * we release i_mutex lock before going to sleep
*
* unfortunately sometimes d_revalidate is called with
- * and sometimes without i_sem lock held. The following checks
+ * and sometimes without i_mutex lock held. The following checks
* attempt to deduce when we need to add (and drop resp.) lock
* here. This relies on current (2.6.2) calling coventions:
*
- * lookup_hash is always run under i_sem and is passing NULL
+ * lookup_hash is always run under i_mutex and is passing NULL
* as nd
*
- * open(...,O_CREATE,...) calls _lookup_hash under i_sem
+ * open(...,O_CREATE,...) calls _lookup_hash under i_mutex
* and sets flags to LOOKUP_OPEN|LOOKUP_CREATE
*
* all other invocations of ->d_revalidate seem to happen
- * outside of i_sem
+ * outside of i_mutex
*/
need_lock = nd &&
(!(nd->flags & LOOKUP_CREATE) || (nd->flags & LOOKUP_PARENT));
if (need_lock)
- down(&dir->i_sem);
+ mutex_lock(&dir->i_mutex);
if (is_devfsd_or_child(fs_info)) {
devfs_handle_t de = lookup_info->de;
add_wait_queue(&lookup_info->wait_queue, &wait);
read_unlock(&parent->u.dir.lock);
/* at this point it is always (hopefully) locked */
- up(&dir->i_sem);
+ mutex_unlock(&dir->i_mutex);
schedule();
- down(&dir->i_sem);
+ mutex_lock(&dir->i_mutex);
/*
* This does not need nor should remove wait from wait_queue.
* Wait queue head is never reused - nothing is ever added to it
out:
if (need_lock)
- up(&dir->i_sem);
+ mutex_unlock(&dir->i_mutex);
return 1;
} /* End Function devfs_d_revalidate_wait */
/* Unlock directory semaphore, which will release any waiters. They
will get the hashed dentry, and may be forced to wait for
revalidation */
- up(&dir->i_sem);
+ mutex_unlock(&dir->i_mutex);
wait_for_devfsd_finished(fs_info); /* If I'm not devfsd, must wait */
- down(&dir->i_sem); /* Grab it again because them's the rules */
+ mutex_lock(&dir->i_mutex); /* Grab it again because them's the rules */
de = lookup_info.de;
/* If someone else has been so kind as to make the inode, we go home
early */
{
char s[12];
struct dentry *root = devpts_root;
- down(&root->d_inode->i_sem);
+ mutex_lock(&root->d_inode->i_mutex);
return lookup_one_len(s, root, sprintf(s, "%d", num));
}
if (!IS_ERR(dentry) && !dentry->d_inode)
d_instantiate(dentry, inode);
- up(&devpts_root->d_inode->i_sem);
+ mutex_unlock(&devpts_root->d_inode->i_mutex);
return 0;
}
dput(dentry);
}
- up(&devpts_root->d_inode->i_sem);
+ mutex_unlock(&devpts_root->d_inode->i_mutex);
return tty;
}
}
dput(dentry);
}
- up(&devpts_root->d_inode->i_sem);
+ mutex_unlock(&devpts_root->d_inode->i_mutex);
}
static int __init init_devpts_fs(void)
* lock_type is DIO_LOCKING for regular files on direct-IO-naive filesystems.
* This determines whether we need to do the fancy locking which prevents
* direct-IO from being able to read uninitialised disk blocks. If its zero
- * (blockdev) this locking is not done, and if it is DIO_OWN_LOCKING i_sem is
+ * (blockdev) this locking is not done, and if it is DIO_OWN_LOCKING i_mutex is
* not held for the entire direct write (taken briefly, initially, during a
* direct read though, but its never held for the duration of a direct-IO).
*/
}
/*
- * Releases both i_sem and i_alloc_sem
+ * Releases both i_mutex and i_alloc_sem
*/
static ssize_t
direct_io_worker(int rw, struct kiocb *iocb, struct inode *inode,
/*
* All block lookups have been performed. For READ requests
- * we can let i_sem go now that its achieved its purpose
+ * we can let i_mutex go now that its achieved its purpose
* of protecting us from looking up uninitialized blocks.
*/
if ((rw == READ) && (dio->lock_type == DIO_LOCKING))
- up(&dio->inode->i_sem);
+ mutex_unlock(&dio->inode->i_mutex);
/*
* OK, all BIOs are submitted, so we can decrement bio_count to truly
* The locking rules are governed by the dio_lock_type parameter.
*
* DIO_NO_LOCKING (no locking, for raw block device access)
- * For writes, i_sem is not held on entry; it is never taken.
+ * For writes, i_mutex is not held on entry; it is never taken.
*
* DIO_LOCKING (simple locking for regular files)
- * For writes we are called under i_sem and return with i_sem held, even though
+ * For writes we are called under i_mutex and return with i_mutex held, even though
* it is internally dropped.
- * For reads, i_sem is not held on entry, but it is taken and dropped before
+ * For reads, i_mutex is not held on entry, but it is taken and dropped before
* returning.
*
* DIO_OWN_LOCKING (filesystem provides synchronisation and handling of
* uninitialised data, allowing parallel direct readers and writers)
- * For writes we are called without i_sem, return without it, never touch it.
- * For reads, i_sem is held on entry and will be released before returning.
+ * For writes we are called without i_mutex, return without it, never touch it.
+ * For reads, i_mutex is held on entry and will be released before returning.
*
* Additional i_alloc_sem locking requirements described inline below.
*/
* For block device access DIO_NO_LOCKING is used,
* neither readers nor writers do any locking at all
* For regular files using DIO_LOCKING,
- * readers need to grab i_sem and i_alloc_sem
- * writers need to grab i_alloc_sem only (i_sem is already held)
+ * readers need to grab i_mutex and i_alloc_sem
+ * writers need to grab i_alloc_sem only (i_mutex is already held)
* For regular files using DIO_OWN_LOCKING,
* neither readers nor writers take any locks here
- * (i_sem is already held and release for writers here)
+ * (i_mutex is already held and release for writers here)
*/
dio->lock_type = dio_lock_type;
if (dio_lock_type != DIO_NO_LOCKING) {
mapping = iocb->ki_filp->f_mapping;
if (dio_lock_type != DIO_OWN_LOCKING) {
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
reader_with_isem = 1;
}
}
if (dio_lock_type == DIO_OWN_LOCKING) {
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
reader_with_isem = 0;
}
}
out:
if (reader_with_isem)
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
if (rw & WRITE)
current->flags &= ~PF_SYNCWRITE;
return retval;
* operation is just reading pointers from inode (or not using them at all) the
* read lock is enough. If pointers are altered function must hold write lock
* (these locking rules also apply for S_NOQUOTA flag in the inode - note that
- * for altering the flag i_sem is also needed). If operation is holding
+ * for altering the flag i_mutex is also needed). If operation is holding
* reference to dquot in other way (e.g. quotactl ops) it must be guarded by
* dqonoff_sem.
* This locking assures that:
* spinlock to internal buffers before writing.
*
* Lock ordering (including related VFS locks) is the following:
- * i_sem > dqonoff_sem > iprune_sem > journal_lock > dqptr_sem >
+ * i_mutex > dqonoff_sem > iprune_sem > journal_lock > dqptr_sem >
* > dquot->dq_lock > dqio_sem
- * i_sem on quota files is special (it's below dqio_sem)
+ * i_mutex on quota files is special (it's below dqio_sem)
*/
static DEFINE_SPINLOCK(dq_list_lock);
/* If quota was reenabled in the meantime, we have
* nothing to do */
if (!sb_has_quota_enabled(sb, cnt)) {
- down(&toputinode[cnt]->i_sem);
+ mutex_lock(&toputinode[cnt]->i_mutex);
toputinode[cnt]->i_flags &= ~(S_IMMUTABLE |
S_NOATIME | S_NOQUOTA);
truncate_inode_pages(&toputinode[cnt]->i_data, 0);
- up(&toputinode[cnt]->i_sem);
+ mutex_unlock(&toputinode[cnt]->i_mutex);
mark_inode_dirty(toputinode[cnt]);
iput(toputinode[cnt]);
}
write_inode_now(inode, 1);
/* And now flush the block cache so that kernel sees the changes */
invalidate_bdev(sb->s_bdev, 0);
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
down(&dqopt->dqonoff_sem);
if (sb_has_quota_enabled(sb, type)) {
error = -EBUSY;
goto out_file_init;
}
up(&dqopt->dqio_sem);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
set_enable_flags(dqopt, type);
add_dquot_ref(sb, type);
inode->i_flags |= oldflags;
up_write(&dqopt->dqptr_sem);
}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
out_fmt:
put_quota_format(fmt);
struct dentry *ppd;
struct dentry *npd;
- down(&pd->d_inode->i_sem);
+ mutex_lock(&pd->d_inode->i_mutex);
ppd = CALL(nops,get_parent)(pd);
- up(&pd->d_inode->i_sem);
+ mutex_unlock(&pd->d_inode->i_mutex);
if (IS_ERR(ppd)) {
err = PTR_ERR(ppd);
break;
}
dprintk("find_exported_dentry: found name: %s\n", nbuf);
- down(&ppd->d_inode->i_sem);
+ mutex_lock(&ppd->d_inode->i_mutex);
npd = lookup_one_len(nbuf, ppd, strlen(nbuf));
- up(&ppd->d_inode->i_sem);
+ mutex_unlock(&ppd->d_inode->i_mutex);
if (IS_ERR(npd)) {
err = PTR_ERR(npd);
dprintk("find_exported_dentry: lookup failed: %d\n", err);
struct dentry *nresult;
err = CALL(nops,get_name)(target_dir, nbuf, result);
if (!err) {
- down(&target_dir->d_inode->i_sem);
+ mutex_lock(&target_dir->d_inode->i_mutex);
nresult = lookup_one_len(nbuf, target_dir, strlen(nbuf));
- up(&target_dir->d_inode->i_sem);
+ mutex_unlock(&target_dir->d_inode->i_mutex);
if (!IS_ERR(nresult)) {
if (nresult->d_inode) {
dput(result);
}
/*
- * inode->i_sem: don't care
+ * inode->i_mutex: don't care
*/
static struct posix_acl *
ext2_get_acl(struct inode *inode, int type)
}
/*
- * inode->i_sem: down
+ * inode->i_mutex: down
*/
static int
ext2_set_acl(struct inode *inode, int type, struct posix_acl *acl)
/*
* Initialize the ACLs of a new inode. Called from ext2_new_inode.
*
- * dir->i_sem: down
- * inode->i_sem: up (access to inode is still exclusive)
+ * dir->i_mutex: down
+ * inode->i_mutex: up (access to inode is still exclusive)
*/
int
ext2_init_acl(struct inode *inode, struct inode *dir)
* for directories) are added. There are no more bits available in the
* file mode.
*
- * inode->i_sem: down
+ * inode->i_mutex: down
*/
int
ext2_acl_chmod(struct inode *inode)
#ifdef CONFIG_EXT2_FS_XATTR
/*
* Extended attributes can be read independently of the main file
- * data. Taking i_sem even when reading would cause contention
+ * data. Taking i_mutex even when reading would cause contention
* between readers of EAs and writers of regular file data, so
* instead we synchronize on xattr_sem when reading or changing
* EAs.
struct buffer_head tmp_bh;
struct buffer_head *bh;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
inode->i_version++;
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
mark_inode_dirty(inode);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return len - towrite;
}
/*
* Inode operation listxattr()
*
- * dentry->d_inode->i_sem: don't care
+ * dentry->d_inode->i_mutex: don't care
*/
ssize_t
ext2_listxattr(struct dentry *dentry, char *buffer, size_t size)
/*
* Inode operation get_posix_acl().
*
- * inode->i_sem: don't care
+ * inode->i_mutex: don't care
*/
static struct posix_acl *
ext3_get_acl(struct inode *inode, int type)
/*
* Set the access or default ACL of an inode.
*
- * inode->i_sem: down unless called from ext3_new_inode
+ * inode->i_mutex: down unless called from ext3_new_inode
*/
static int
ext3_set_acl(handle_t *handle, struct inode *inode, int type,
/*
* Initialize the ACLs of a new inode. Called from ext3_new_inode.
*
- * dir->i_sem: down
- * inode->i_sem: up (access to inode is still exclusive)
+ * dir->i_mutex: down
+ * inode->i_mutex: up (access to inode is still exclusive)
*/
int
ext3_init_acl(handle_t *handle, struct inode *inode, struct inode *dir)
* for directories) are added. There are no more bits available in the
* file mode.
*
- * inode->i_sem: down
+ * inode->i_mutex: down
*/
int
ext3_acl_chmod(struct inode *inode)
static void ext3_write_super (struct super_block * sb)
{
- if (down_trylock(&sb->s_lock) == 0)
+ if (mutex_trylock(&sb->s_lock) != 0)
BUG();
sb->s_dirt = 0;
}
struct buffer_head *bh;
handle_t *handle = journal_current_handle();
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
inode->i_version++;
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
ext3_mark_inode_dirty(handle, inode);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return len - towrite;
}
/*
* Inode operation listxattr()
*
- * dentry->d_inode->i_sem: don't care
+ * dentry->d_inode->i_mutex: don't care
*/
ssize_t
ext3_listxattr(struct dentry *dentry, char *buffer, size_t size)
buf.dirent = d1;
buf.result = 0;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
ret = -ENOENT;
if (!IS_DEADDIR(inode)) {
ret = __fat_readdir(inode, filp, &buf, fat_ioctl_filldir,
short_only, both);
}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
if (ret >= 0)
ret = buf.result;
return ret;
if (err)
return err;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
if (IS_RDONLY(inode)) {
err = -EROFS;
MSDOS_I(inode)->i_attrs = attr & ATTR_UNUSED;
mark_inode_dirty(inode);
up:
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return err;
}
default:
int ret;
ret = -ERESTARTSYS;
- if (down_interruptible(PIPE_SEM(*inode)))
+ if (mutex_lock_interruptible(PIPE_MUTEX(*inode)))
goto err_nolock_nocleanup;
if (!inode->i_pipe) {
}
/* Ok! */
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
return 0;
err_rd:
free_pipe_info(inode);
err_nocleanup:
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
err_nolock_nocleanup:
return ret;
struct inode *inode = file->f_dentry->d_inode;
ssize_t res;
/* Don't allow parallel writes to the same file */
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
res = fuse_direct_io(file, buf, count, ppos, 1);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return res;
}
if (atomic_read(&file->f_count) != 0)
return 0;
if (atomic_dec_and_test(&HFS_I(inode)->opencnt)) {
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
hfs_file_truncate(inode);
//if (inode->i_flags & S_DEAD) {
// hfs_delete_cat(inode->i_ino, HFSPLUS_SB(sb).hidden_dir, NULL);
// hfs_delete_inode(inode);
//}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
}
return 0;
}
return size;
dprint(DBG_BITMAP, "block_allocate: %u,%u,%u\n", size, offset, len);
- down(&HFSPLUS_SB(sb).alloc_file->i_sem);
+ mutex_lock(&HFSPLUS_SB(sb).alloc_file->i_mutex);
mapping = HFSPLUS_SB(sb).alloc_file->i_mapping;
page = read_cache_page(mapping, offset / PAGE_CACHE_BITS,
(filler_t *)mapping->a_ops->readpage, NULL);
sb->s_dirt = 1;
dprint(DBG_BITMAP, "-> %u,%u\n", start, *max);
out:
- up(&HFSPLUS_SB(sb).alloc_file->i_sem);
+ mutex_unlock(&HFSPLUS_SB(sb).alloc_file->i_mutex);
return start;
}
if ((offset + count) > HFSPLUS_SB(sb).total_blocks)
return -2;
- down(&HFSPLUS_SB(sb).alloc_file->i_sem);
+ mutex_lock(&HFSPLUS_SB(sb).alloc_file->i_mutex);
mapping = HFSPLUS_SB(sb).alloc_file->i_mapping;
pnr = offset / PAGE_CACHE_BITS;
page = read_cache_page(mapping, pnr, (filler_t *)mapping->a_ops->readpage, NULL);
kunmap(page);
HFSPLUS_SB(sb).free_blocks += len;
sb->s_dirt = 1;
- up(&HFSPLUS_SB(sb).alloc_file->i_sem);
+ mutex_unlock(&HFSPLUS_SB(sb).alloc_file->i_mutex);
return 0;
}
if (atomic_read(&file->f_count) != 0)
return 0;
if (atomic_dec_and_test(&HFSPLUS_I(inode).opencnt)) {
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
hfsplus_file_truncate(inode);
if (inode->i_flags & S_DEAD) {
hfsplus_delete_cat(inode->i_ino, HFSPLUS_SB(sb).hidden_dir, NULL);
hfsplus_delete_inode(inode);
}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
}
return 0;
}
/*printk("dir lseek\n");*/
if (new_off == 0 || new_off == 1 || new_off == 11 || new_off == 12 || new_off == 13) goto ok;
- down(&i->i_sem);
+ mutex_lock(&i->i_mutex);
pos = ((loff_t) hpfs_de_as_down_as_possible(s, hpfs_inode->i_dno) << 4) + 1;
while (pos != new_off) {
if (map_pos_dirent(i, &pos, &qbh)) hpfs_brelse4(&qbh);
else goto fail;
if (pos == 12) goto fail;
}
- up(&i->i_sem);
+ mutex_unlock(&i->i_mutex);
ok:
unlock_kernel();
return filp->f_pos = new_off;
fail:
- up(&i->i_sem);
+ mutex_unlock(&i->i_mutex);
/*printk("illegal lseek: %016llx\n", new_off);*/
unlock_kernel();
return -ESPIPE;
err = -ENOMEM;
parent = HPPFS_I(ino)->proc_dentry;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
proc_dentry = d_lookup(parent, &dentry->d_name);
if(proc_dentry == NULL){
proc_dentry = d_alloc(parent, &dentry->d_name);
if(proc_dentry == NULL){
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
goto out;
}
new = (*parent->d_inode->i_op->lookup)(parent->d_inode,
proc_dentry = new;
}
}
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
if(IS_ERR(proc_dentry))
return(proc_dentry);
vma_len = (loff_t)(vma->vm_end - vma->vm_start);
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
file_accessed(file);
vma->vm_flags |= VM_HUGETLB | VM_RESERVED;
vma->vm_ops = &hugetlb_vm_ops;
if (inode->i_size < len)
inode->i_size = len;
out:
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return ret;
}
INIT_HLIST_NODE(&inode->i_hash);
INIT_LIST_HEAD(&inode->i_dentry);
INIT_LIST_HEAD(&inode->i_devices);
- sema_init(&inode->i_sem, 1);
+ mutex_init(&inode->i_mutex);
init_rwsem(&inode->i_alloc_sem);
INIT_RADIX_TREE(&inode->i_data.page_tree, GFP_ATOMIC);
rwlock_init(&inode->i_data.tree_lock);
* This will never trigger with sane page sizes. leave it in
* anyway, since I'm thinking about how to merge larger writes
* (the current idea is to poke a thread that does the actual
- * I/O and starts by doing a down(&inode->i_sem). then we
+ * I/O and starts by doing a mutex_lock(&inode->i_mutex). then we
* would need to get the page cache pages and have a list of
* I/O requests and do write-merging here.
* -- prumpf
/*
* rdwrlock serializes xtree between reads & writes and synchronizes
* changes to special inodes. It's use would be redundant on
- * directories since the i_sem taken in the VFS is sufficient.
+ * directories since the i_mutex taken in the VFS is sufficient.
*/
struct rw_semaphore rdwrlock;
/*
* inode is blocked in txBegin or TxBeginAnon
*/
struct semaphore commit_sem;
- /* xattr_sem allows us to access the xattrs without taking i_sem */
+ /* xattr_sem allows us to access the xattrs without taking i_mutex */
struct rw_semaphore xattr_sem;
lid_t xtlid; /* lid of xtree lock on directory */
#ifdef CONFIG_JFS_POSIX_ACL
loff_t dcache_dir_lseek(struct file *file, loff_t offset, int origin)
{
- down(&file->f_dentry->d_inode->i_sem);
+ mutex_lock(&file->f_dentry->d_inode->i_mutex);
switch (origin) {
case 1:
offset += file->f_pos;
if (offset >= 0)
break;
default:
- up(&file->f_dentry->d_inode->i_sem);
+ mutex_unlock(&file->f_dentry->d_inode->i_mutex);
return -EINVAL;
}
if (offset != file->f_pos) {
spin_unlock(&dcache_lock);
}
}
- up(&file->f_dentry->d_inode->i_sem);
+ mutex_unlock(&file->f_dentry->d_inode->i_mutex);
return offset;
}
/*
* No need to use i_size_read() here, the i_size
- * cannot change under us because we hold the i_sem.
+ * cannot change under us because we hold the i_mutex.
*/
if (pos > inode->i_size)
i_size_write(inode, pos);
struct dentry * result;
struct inode *dir = parent->d_inode;
- down(&dir->i_sem);
+ mutex_lock(&dir->i_mutex);
/*
* First re-do the cached lookup just in case it was created
* while we waited for the directory semaphore..
else
result = dentry;
}
- up(&dir->i_sem);
+ mutex_unlock(&dir->i_mutex);
return result;
}
* Uhhuh! Nasty case: the cache was re-populated while
* we waited on the semaphore. Need to revalidate.
*/
- up(&dir->i_sem);
+ mutex_unlock(&dir->i_mutex);
if (result->d_op && result->d_op->d_revalidate) {
if (!result->d_op->d_revalidate(result, nd) && !d_invalidate(result)) {
dput(result);
struct dentry *p;
if (p1 == p2) {
- down(&p1->d_inode->i_sem);
+ mutex_lock(&p1->d_inode->i_mutex);
return NULL;
}
for (p = p1; p->d_parent != p; p = p->d_parent) {
if (p->d_parent == p2) {
- down(&p2->d_inode->i_sem);
- down(&p1->d_inode->i_sem);
+ mutex_lock(&p2->d_inode->i_mutex);
+ mutex_lock(&p1->d_inode->i_mutex);
return p;
}
}
for (p = p2; p->d_parent != p; p = p->d_parent) {
if (p->d_parent == p1) {
- down(&p1->d_inode->i_sem);
- down(&p2->d_inode->i_sem);
+ mutex_lock(&p1->d_inode->i_mutex);
+ mutex_lock(&p2->d_inode->i_mutex);
return p;
}
}
- down(&p1->d_inode->i_sem);
- down(&p2->d_inode->i_sem);
+ mutex_lock(&p1->d_inode->i_mutex);
+ mutex_lock(&p2->d_inode->i_mutex);
return NULL;
}
void unlock_rename(struct dentry *p1, struct dentry *p2)
{
- up(&p1->d_inode->i_sem);
+ mutex_unlock(&p1->d_inode->i_mutex);
if (p1 != p2) {
- up(&p2->d_inode->i_sem);
+ mutex_unlock(&p2->d_inode->i_mutex);
up(&p1->d_inode->i_sb->s_vfs_rename_sem);
}
}
dir = nd->dentry;
nd->flags &= ~LOOKUP_PARENT;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
path.dentry = lookup_hash(nd);
path.mnt = nd->mnt;
do_last:
error = PTR_ERR(path.dentry);
if (IS_ERR(path.dentry)) {
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
goto exit;
}
if (!IS_POSIXACL(dir->d_inode))
mode &= ~current->fs->umask;
error = vfs_create(dir->d_inode, path.dentry, mode, nd);
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
dput(nd->dentry);
nd->dentry = path.dentry;
if (error)
/*
* It already exists.
*/
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
error = -EEXIST;
if (flag & O_EXCL)
goto exit;
}
dir = nd->dentry;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
path.dentry = lookup_hash(nd);
path.mnt = nd->mnt;
__putname(nd->last.name);
* Simple function to lookup and return a dentry and create it
* if it doesn't exist. Is SMP-safe.
*
- * Returns with nd->dentry->d_inode->i_sem locked.
+ * Returns with nd->dentry->d_inode->i_mutex locked.
*/
struct dentry *lookup_create(struct nameidata *nd, int is_dir)
{
struct dentry *dentry = ERR_PTR(-EEXIST);
- down(&nd->dentry->d_inode->i_sem);
+ mutex_lock(&nd->dentry->d_inode->i_mutex);
/*
* Yucky last component or no last component at all?
* (foo/., foo/.., /////)
}
dput(dentry);
}
- up(&nd.dentry->d_inode->i_sem);
+ mutex_unlock(&nd.dentry->d_inode->i_mutex);
path_release(&nd);
out:
putname(tmp);
error = vfs_mkdir(nd.dentry->d_inode, dentry, mode);
dput(dentry);
}
- up(&nd.dentry->d_inode->i_sem);
+ mutex_unlock(&nd.dentry->d_inode->i_mutex);
path_release(&nd);
out:
putname(tmp);
DQUOT_INIT(dir);
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
dentry_unhash(dentry);
if (d_mountpoint(dentry))
error = -EBUSY;
dentry->d_inode->i_flags |= S_DEAD;
}
}
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
if (!error) {
d_delete(dentry);
}
error = -EBUSY;
goto exit1;
}
- down(&nd.dentry->d_inode->i_sem);
+ mutex_lock(&nd.dentry->d_inode->i_mutex);
dentry = lookup_hash(&nd);
error = PTR_ERR(dentry);
if (!IS_ERR(dentry)) {
error = vfs_rmdir(nd.dentry->d_inode, dentry);
dput(dentry);
}
- up(&nd.dentry->d_inode->i_sem);
+ mutex_unlock(&nd.dentry->d_inode->i_mutex);
exit1:
path_release(&nd);
exit:
DQUOT_INIT(dir);
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
if (d_mountpoint(dentry))
error = -EBUSY;
else {
if (!error)
error = dir->i_op->unlink(dir, dentry);
}
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
/* We don't d_delete() NFS sillyrenamed files--they still exist. */
if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) {
/*
* Make sure that the actual truncation of the file will occur outside its
- * directory's i_sem. Truncate can take a long time if there is a lot of
+ * directory's i_mutex. Truncate can take a long time if there is a lot of
* writeout happening, and we don't want to prevent access to the directory
* while waiting on the I/O.
*/
error = -EISDIR;
if (nd.last_type != LAST_NORM)
goto exit1;
- down(&nd.dentry->d_inode->i_sem);
+ mutex_lock(&nd.dentry->d_inode->i_mutex);
dentry = lookup_hash(&nd);
error = PTR_ERR(dentry);
if (!IS_ERR(dentry)) {
exit2:
dput(dentry);
}
- up(&nd.dentry->d_inode->i_sem);
+ mutex_unlock(&nd.dentry->d_inode->i_mutex);
if (inode)
iput(inode); /* truncate the inode here */
exit1:
error = vfs_symlink(nd.dentry->d_inode, dentry, from, S_IALLUGO);
dput(dentry);
}
- up(&nd.dentry->d_inode->i_sem);
+ mutex_unlock(&nd.dentry->d_inode->i_mutex);
path_release(&nd);
out:
putname(to);
if (error)
return error;
- down(&old_dentry->d_inode->i_sem);
+ mutex_lock(&old_dentry->d_inode->i_mutex);
DQUOT_INIT(dir);
error = dir->i_op->link(old_dentry, dir, new_dentry);
- up(&old_dentry->d_inode->i_sem);
+ mutex_unlock(&old_dentry->d_inode->i_mutex);
if (!error)
fsnotify_create(dir, new_dentry->d_name.name);
return error;
error = vfs_link(old_nd.dentry, nd.dentry->d_inode, new_dentry);
dput(new_dentry);
}
- up(&nd.dentry->d_inode->i_sem);
+ mutex_unlock(&nd.dentry->d_inode->i_mutex);
out_release:
path_release(&nd);
out:
* sb->s_vfs_rename_sem. We might be more accurate, but that's another
* story.
* c) we have to lock _three_ objects - parents and victim (if it exists).
- * And that - after we got ->i_sem on parents (until then we don't know
+ * And that - after we got ->i_mutex on parents (until then we don't know
* whether the target exists). Solution: try to be smart with locking
* order for inodes. We rely on the fact that tree topology may change
* only under ->s_vfs_rename_sem _and_ that parent of the object we
* stuff into VFS), but the former is not going away. Solution: the same
* trick as in rmdir().
* e) conversion from fhandle to dentry may come in the wrong moment - when
- * we are removing the target. Solution: we will have to grab ->i_sem
+ * we are removing the target. Solution: we will have to grab ->i_mutex
* in the fhandle_to_dentry code. [FIXME - current nfsfh.c relies on
- * ->i_sem on parents, which works but leads to some truely excessive
+ * ->i_mutex on parents, which works but leads to some truely excessive
* locking].
*/
static int vfs_rename_dir(struct inode *old_dir, struct dentry *old_dentry,
target = new_dentry->d_inode;
if (target) {
- down(&target->i_sem);
+ mutex_lock(&target->i_mutex);
dentry_unhash(new_dentry);
}
if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry))
if (target) {
if (!error)
target->i_flags |= S_DEAD;
- up(&target->i_sem);
+ mutex_unlock(&target->i_mutex);
if (d_unhashed(new_dentry))
d_rehash(new_dentry);
dput(new_dentry);
dget(new_dentry);
target = new_dentry->d_inode;
if (target)
- down(&target->i_sem);
+ mutex_lock(&target->i_mutex);
if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry))
error = -EBUSY;
else
d_move(old_dentry, new_dentry);
}
if (target)
- up(&target->i_sem);
+ mutex_unlock(&target->i_mutex);
dput(new_dentry);
return error;
}
return -ENOTDIR;
err = -ENOENT;
- down(&nd->dentry->d_inode->i_sem);
+ mutex_lock(&nd->dentry->d_inode->i_mutex);
if (IS_DEADDIR(nd->dentry->d_inode))
goto out_unlock;
if (IS_ROOT(nd->dentry) || !d_unhashed(nd->dentry))
err = attach_recursive_mnt(mnt, nd, NULL);
out_unlock:
- up(&nd->dentry->d_inode->i_sem);
+ mutex_unlock(&nd->dentry->d_inode->i_mutex);
if (!err)
security_sb_post_addmount(mnt, nd);
return err;
goto out;
err = -ENOENT;
- down(&nd->dentry->d_inode->i_sem);
+ mutex_lock(&nd->dentry->d_inode->i_mutex);
if (IS_DEADDIR(nd->dentry->d_inode))
goto out1;
list_del_init(&old_nd.mnt->mnt_expire);
spin_unlock(&vfsmount_lock);
out1:
- up(&nd->dentry->d_inode->i_sem);
+ mutex_unlock(&nd->dentry->d_inode->i_mutex);
out:
up_write(&namespace_sem);
if (!err)
user_nd.dentry = dget(current->fs->root);
read_unlock(&current->fs->lock);
down_write(&namespace_sem);
- down(&old_nd.dentry->d_inode->i_sem);
+ mutex_lock(&old_nd.dentry->d_inode->i_mutex);
error = -EINVAL;
if (IS_MNT_SHARED(old_nd.mnt) ||
IS_MNT_SHARED(new_nd.mnt->mnt_parent) ||
path_release(&root_parent);
path_release(&parent_nd);
out2:
- up(&old_nd.dentry->d_inode->i_sem);
+ mutex_unlock(&old_nd.dentry->d_inode->i_mutex);
up_write(&namespace_sem);
path_release(&user_nd);
path_release(&old_nd);
spin_unlock(&inode->i_lock);
/* Ensure consistent page alignment of the data.
* Note: assumes we have exclusive access to this mapping either
- * through inode->i_sem or some other mechanism.
+ * through inode->i_mutex or some other mechanism.
*/
if (page->index == 0)
invalidate_inode_pages2_range(inode->i_mapping, PAGE_CACHE_SIZE, -1);
loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin)
{
- down(&filp->f_dentry->d_inode->i_sem);
+ mutex_lock(&filp->f_dentry->d_inode->i_mutex);
switch (origin) {
case 1:
offset += filp->f_pos;
((struct nfs_open_context *)filp->private_data)->dir_cookie = 0;
}
out:
- up(&filp->f_dentry->d_inode->i_sem);
+ mutex_unlock(&filp->f_dentry->d_inode->i_mutex);
return offset;
}
openflags &= ~(O_CREAT|O_TRUNC);
/*
- * Note: we're not holding inode->i_sem and so may be racing with
+ * Note: we're not holding inode->i_mutex and so may be racing with
* operations that change the directory. We therefore save the
* change attribute *before* we do the RPC call.
*/
return dentry;
if (!desc->plus || !(entry->fattr->valid & NFS_ATTR_FATTR))
return NULL;
- /* Note: caller is already holding the dir->i_sem! */
+ /* Note: caller is already holding the dir->i_mutex! */
dentry = d_alloc(parent, &name);
if (dentry == NULL)
return NULL;
static void
nfsd4_sync_rec_dir(void)
{
- down(&rec_dir.dentry->d_inode->i_sem);
+ mutex_lock(&rec_dir.dentry->d_inode->i_mutex);
nfsd_sync_dir(rec_dir.dentry);
- up(&rec_dir.dentry->d_inode->i_sem);
+ mutex_unlock(&rec_dir.dentry->d_inode->i_mutex);
}
int
nfs4_save_user(&uid, &gid);
/* lock the parent */
- down(&rec_dir.dentry->d_inode->i_sem);
+ mutex_lock(&rec_dir.dentry->d_inode->i_mutex);
dentry = lookup_one_len(dname, rec_dir.dentry, HEXDIR_LEN-1);
if (IS_ERR(dentry)) {
out_put:
dput(dentry);
out_unlock:
- up(&rec_dir.dentry->d_inode->i_sem);
+ mutex_unlock(&rec_dir.dentry->d_inode->i_mutex);
if (status == 0) {
clp->cl_firststate = 1;
nfsd4_sync_rec_dir();
printk("nfsd4: non-file found in client recovery directory\n");
return -EINVAL;
}
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
status = vfs_unlink(dir->d_inode, dentry);
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
return status;
}
* any regular files anyway, just in case the directory was created by
* a kernel from the future.... */
nfsd4_list_rec_dir(dentry, nfsd4_remove_clid_file);
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
status = vfs_rmdir(dir->d_inode, dentry);
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
return status;
}
dprintk("NFSD: nfsd4_unlink_clid_dir. name %.*s\n", namlen, name);
- down(&rec_dir.dentry->d_inode->i_sem);
+ mutex_lock(&rec_dir.dentry->d_inode->i_mutex);
dentry = lookup_one_len(name, rec_dir.dentry, namlen);
- up(&rec_dir.dentry->d_inode->i_sem);
+ mutex_unlock(&rec_dir.dentry->d_inode->i_mutex);
if (IS_ERR(dentry)) {
status = PTR_ERR(dentry);
return status;
error = -EOPNOTSUPP;
if (inode->i_op && inode->i_op->setxattr) {
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
security_inode_setxattr(dentry, key, buf, len, 0);
error = inode->i_op->setxattr(dentry, key, buf, len, 0);
if (!error)
security_inode_post_setxattr(dentry, key, buf, len, 0);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
}
out:
kfree(buf);
int err;
struct inode *inode = filp->f_dentry->d_inode;
dprintk("nfsd: sync file %s\n", filp->f_dentry->d_name.name);
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
err=nfsd_dosync(filp, filp->f_dentry, filp->f_op);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return err;
}
struct iattr ia;
ia.ia_valid = ATTR_KILL_SUID | ATTR_KILL_SGID;
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
notify_change(dentry, &ia);
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
}
static inline int
* NOTE to self: No changes in the attribute list are required to move from
* a resident to a non-resident attribute.
*
- * Locking: - The caller must hold i_sem on the inode.
+ * Locking: - The caller must hold i_mutex on the inode.
*/
int ntfs_attr_make_non_resident(ntfs_inode *ni, const u32 data_size)
{
/*
* This needs to be last since the address space operations ->readpage
* and ->writepage can run concurrently with us as they are not
- * serialized on i_sem. Note, we are not allowed to fail once we flip
+ * serialized on i_mutex. Note, we are not allowed to fail once we flip
* this switch, which is another reason to do this last.
*/
NInoSetNonResident(ni);
* work but we don't care for how quickly one can access them. This also fixes
* the dcache aliasing issues.
*
- * Locking: - Caller must hold i_sem on the directory.
+ * Locking: - Caller must hold i_mutex on the directory.
* - Each page cache page in the index allocation mapping must be
* locked whilst being accessed otherwise we may find a corrupt
* page due to it being under ->writepage at the moment which
* While this will return the names in random order this doesn't matter for
* ->readdir but OTOH results in a faster ->readdir.
*
- * VFS calls ->readdir without BKL but with i_sem held. This protects the VFS
+ * VFS calls ->readdir without BKL but with i_mutex held. This protects the VFS
* parts (e.g. ->f_pos and ->i_size, and it also protects against directory
* modifications).
*
- * Locking: - Caller must hold i_sem on the directory.
+ * Locking: - Caller must hold i_mutex on the directory.
* - Each page cache page in the index allocation mapping must be
* locked whilst being accessed otherwise we may find a corrupt
* page due to it being under ->writepage at the moment which
* Note: In the past @filp could be NULL so we ignore it as we don't need it
* anyway.
*
- * Locking: Caller must hold i_sem on the inode.
+ * Locking: Caller must hold i_mutex on the inode.
*
* TODO: We should probably also write all attribute/index inodes associated
* with this inode but since we have no simple way of getting to them we ignore
* this is the case, the necessary zeroing will also have happened and that all
* metadata is self-consistent.
*
- * Locking: i_sem on the vfs inode corrseponsind to the ntfs inode @ni must be
+ * Locking: i_mutex on the vfs inode corresponding to the ntfs inode @ni must be
* held by the caller.
*/
static int ntfs_attr_extend_initialized(ntfs_inode *ni, const s64 new_init_size,
* @bytes: number of bytes to be written
*
* This is called for non-resident attributes from ntfs_file_buffered_write()
- * with i_sem held on the inode (@pages[0]->mapping->host). There are
+ * with i_mutex held on the inode (@pages[0]->mapping->host). There are
* @nr_pages pages in @pages which are locked but not kmap()ped. The source
* data has not yet been copied into the @pages.
*
* @pos: byte position in file at which the write begins
* @bytes: number of bytes to be written
*
- * This is called from ntfs_file_buffered_write() with i_sem held on the inode
+ * This is called from ntfs_file_buffered_write() with i_mutex held on the inode
* (@pages[0]->mapping->host). There are @nr_pages pages in @pages which are
* locked but not kmap()ped. The source data has already been copied into the
* @page. ntfs_prepare_pages_for_non_resident_write() has been called before
/**
* ntfs_file_buffered_write -
*
- * Locking: The vfs is holding ->i_sem on the inode.
+ * Locking: The vfs is holding ->i_mutex on the inode.
*/
static ssize_t ntfs_file_buffered_write(struct kiocb *iocb,
const struct iovec *iov, unsigned long nr_segs,
BUG_ON(iocb->ki_pos != pos);
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
ret = ntfs_file_aio_write_nolock(iocb, &local_iov, 1, &iocb->ki_pos);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) {
int err = sync_page_range(inode, mapping, pos, ret);
if (err < 0)
struct kiocb kiocb;
ssize_t ret;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
init_sync_kiocb(&kiocb, file);
ret = ntfs_file_aio_write_nolock(&kiocb, iov, nr_segs, ppos);
if (ret == -EIOCBQUEUED)
ret = wait_on_sync_kiocb(&kiocb);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
if (ret > 0 && ((file->f_flags & O_SYNC) || IS_SYNC(inode))) {
int err = sync_page_range(inode, mapping, *ppos - ret, ret);
if (err < 0)
* Note: In the past @filp could be NULL so we ignore it as we don't need it
* anyway.
*
- * Locking: Caller must hold i_sem on the inode.
+ * Locking: Caller must hold i_mutex on the inode.
*
* TODO: We should probably also write all attribute/index inodes associated
* with this inode but since we have no simple way of getting to them we ignore
* Allocate a new index context, initialize it with @idx_ni and return it.
* Return NULL if allocation failed.
*
- * Locking: Caller must hold i_sem on the index inode.
+ * Locking: Caller must hold i_mutex on the index inode.
*/
ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni)
{
*
* Release the index context @ictx, releasing all associated resources.
*
- * Locking: Caller must hold i_sem on the index inode.
+ * Locking: Caller must hold i_mutex on the index inode.
*/
void ntfs_index_ctx_put(ntfs_index_context *ictx)
{
* or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to
* ensure that the changes are written to disk.
*
- * Locking: - Caller must hold i_sem on the index inode.
+ * Locking: - Caller must hold i_mutex on the index inode.
* - Each page cache page in the index allocation mapping must be
* locked whilst being accessed otherwise we may find a corrupt
* page due to it being under ->writepage at the moment which
ntfs_inode *ni = NTFS_I(vi);
if (NInoIndexAllocPresent(ni)) {
struct inode *bvi = NULL;
- down(&vi->i_sem);
+ mutex_lock(&vi->i_mutex);
if (atomic_read(&vi->i_count) == 2) {
bvi = ni->itype.index.bmp_ino;
if (bvi)
ni->itype.index.bmp_ino = NULL;
}
- up(&vi->i_sem);
+ mutex_unlock(&vi->i_mutex);
if (bvi)
iput(bvi);
}
*
* Returns 0 on success or -errno on error.
*
- * Called with ->i_sem held. In all but one case ->i_alloc_sem is held for
+ * Called with ->i_mutex held. In all but one case ->i_alloc_sem is held for
* writing. The only case in the kernel where ->i_alloc_sem is not held is
* mm/filemap.c::generic_file_buffered_write() where vmtruncate() is called
* with the current i_size as the offset. The analogous place in NTFS is in
* We also abort all changes of user, group, and mode as we do not implement
* the NTFS ACLs yet.
*
- * Called with ->i_sem held. For the ATTR_SIZE (i.e. ->truncate) case, also
+ * Called with ->i_mutex held. For the ATTR_SIZE (i.e. ->truncate) case, also
* called with ->i_alloc_sem held for writing.
*
* Basically this is a copy of generic notify_change() and inode_setattr()
* name. We then convert the name to the current NLS code page, and proceed
* searching for a dentry with this name, etc, as in case 2), above.
*
- * Locking: Caller must hold i_sem on the directory.
+ * Locking: Caller must hold i_mutex on the directory.
*/
static struct dentry *ntfs_lookup(struct inode *dir_ino, struct dentry *dent,
struct nameidata *nd)
nls_name.hash = full_name_hash(nls_name.name, nls_name.len);
/*
- * Note: No need for dent->d_lock lock as i_sem is held on the
+ * Note: No need for dent->d_lock lock as i_mutex is held on the
* parent inode.
*/
* The code is based on the ext3 ->get_parent() implementation found in
* fs/ext3/namei.c::ext3_get_parent().
*
- * Note: ntfs_get_parent() is called with @child_dent->d_inode->i_sem down.
+ * Note: ntfs_get_parent() is called with @child_dent->d_inode->i_mutex down.
*
* Return the dentry of the parent directory on success or the error code on
* error (IS_ERR() is true).
ntfs_error(vol->sb, "Quota inodes are not open.");
return FALSE;
}
- down(&vol->quota_q_ino->i_sem);
+ mutex_lock(&vol->quota_q_ino->i_mutex);
ictx = ntfs_index_ctx_get(NTFS_I(vol->quota_q_ino));
if (!ictx) {
ntfs_error(vol->sb, "Failed to get index context.");
ntfs_index_entry_mark_dirty(ictx);
set_done:
ntfs_index_ctx_put(ictx);
- up(&vol->quota_q_ino->i_sem);
+ mutex_unlock(&vol->quota_q_ino->i_mutex);
/*
* We set the flag so we do not try to mark the quotas out of date
* again on remount.
err_out:
if (ictx)
ntfs_index_ctx_put(ictx);
- up(&vol->quota_q_ino->i_sem);
+ mutex_unlock(&vol->quota_q_ino->i_mutex);
return FALSE;
}
* Find the inode number for the hibernation file by looking up the
* filename hiberfil.sys in the root directory.
*/
- down(&vol->root_ino->i_sem);
+ mutex_lock(&vol->root_ino->i_mutex);
mref = ntfs_lookup_inode_by_name(NTFS_I(vol->root_ino), hiberfil, 12,
&name);
- up(&vol->root_ino->i_sem);
+ mutex_unlock(&vol->root_ino->i_mutex);
if (IS_ERR_MREF(mref)) {
ret = MREF_ERR(mref);
/* If the file does not exist, Windows is not hibernated. */
* Find the inode number for the quota file by looking up the filename
* $Quota in the extended system files directory $Extend.
*/
- down(&vol->extend_ino->i_sem);
+ mutex_lock(&vol->extend_ino->i_mutex);
mref = ntfs_lookup_inode_by_name(NTFS_I(vol->extend_ino), Quota, 6,
&name);
- up(&vol->extend_ino->i_sem);
+ mutex_unlock(&vol->extend_ino->i_mutex);
if (IS_ERR_MREF(mref)) {
/*
* If the file does not exist, quotas are disabled and have
* Find the inode number for the transaction log file by looking up the
* filename $UsnJrnl in the extended system files directory $Extend.
*/
- down(&vol->extend_ino->i_sem);
+ mutex_lock(&vol->extend_ino->i_mutex);
mref = ntfs_lookup_inode_by_name(NTFS_I(vol->extend_ino), UsnJrnl, 8,
&name);
- up(&vol->extend_ino->i_sem);
+ mutex_unlock(&vol->extend_ino->i_mutex);
if (IS_ERR_MREF(mref)) {
/*
* If the file does not exist, transaction logging is disabled,
if (!list_empty(&sb->s_dirty)) {
const char *s1, *s2;
- down(&vol->mft_ino->i_sem);
+ mutex_lock(&vol->mft_ino->i_mutex);
truncate_inode_pages(vol->mft_ino->i_mapping, 0);
- up(&vol->mft_ino->i_sem);
+ mutex_unlock(&vol->mft_ino->i_mutex);
write_inode_now(vol->mft_ino, 1);
if (!list_empty(&sb->s_dirty)) {
static const char *_s1 = "inodes";
mlog_entry("start_blk = %"MLFu64", num_clusters = %u\n", start_blk,
num_clusters);
- BUG_ON(!down_trylock(&tl_inode->i_sem));
+ BUG_ON(mutex_trylock(&tl_inode->i_mutex));
start_cluster = ocfs2_blocks_to_clusters(osb->sb, start_blk);
return status;
}
-/* Expects you to already be holding tl_inode->i_sem */
+/* Expects you to already be holding tl_inode->i_mutex */
static int __ocfs2_flush_truncate_log(struct ocfs2_super *osb)
{
int status;
mlog_entry_void();
- BUG_ON(!down_trylock(&tl_inode->i_sem));
+ BUG_ON(mutex_trylock(&tl_inode->i_mutex));
di = (struct ocfs2_dinode *) tl_bh->b_data;
tl = &di->id2.i_dealloc;
int status;
struct inode *tl_inode = osb->osb_tl_inode;
- down(&tl_inode->i_sem);
+ mutex_lock(&tl_inode->i_mutex);
status = __ocfs2_flush_truncate_log(osb);
- up(&tl_inode->i_sem);
+ mutex_unlock(&tl_inode->i_mutex);
return status;
}
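One detail of the conversion, visible in the BUG_ON() changes above, deserves spelling out: down_trylock() returns 0 when it acquires the semaphore, while mutex_trylock() returns nonzero when it acquires the mutex, so an assertion that the caller already holds the lock loses its negation when converted. A hedged sketch of the two forms (the helper names are made up; the field names match the surrounding code):

/* Old form: down_trylock() == 0 means we just took the semaphore,
 * i.e. the caller did NOT already hold it - the bug condition. */
static inline void assert_caller_holds_sem(struct inode *inode)
{
	BUG_ON(!down_trylock(&inode->i_sem));
}

/* New form: mutex_trylock() != 0 means we just took the mutex,
 * i.e. the caller did NOT already hold it - same condition, no negation. */
static inline void assert_caller_holds_mutex(struct inode *inode)
{
	BUG_ON(mutex_trylock(&inode->i_mutex));
}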
mlog(0, "cleanup %u records from %"MLFu64"\n", num_recs,
tl_copy->i_blkno);
- down(&tl_inode->i_sem);
+ mutex_lock(&tl_inode->i_mutex);
for(i = 0; i < num_recs; i++) {
if (ocfs2_truncate_log_needs_flush(osb)) {
status = __ocfs2_flush_truncate_log(osb);
}
bail_up:
- up(&tl_inode->i_sem);
+ mutex_unlock(&tl_inode->i_mutex);
mlog_exit(status);
return status;
mlog(0, "clusters_to_del = %u in this pass\n", clusters_to_del);
- down(&tl_inode->i_sem);
+ mutex_lock(&tl_inode->i_mutex);
tl_sem = 1;
/* ocfs2_truncate_log_needs_flush guarantees us at least one
* record is free for use. If there isn't any, we flush to get
goto bail;
}
- up(&tl_inode->i_sem);
+ mutex_unlock(&tl_inode->i_mutex);
tl_sem = 0;
ocfs2_commit_trans(handle);
ocfs2_schedule_truncate_log_flush(osb, 1);
if (tl_sem)
- up(&tl_inode->i_sem);
+ mutex_unlock(&tl_inode->i_mutex);
if (handle)
ocfs2_commit_trans(handle);
goto bail;
}
- down(&ext_alloc_inode->i_sem);
+ mutex_lock(&ext_alloc_inode->i_mutex);
(*tc)->tc_ext_alloc_inode = ext_alloc_inode;
status = ocfs2_meta_lock(ext_alloc_inode,
if (tc->tc_ext_alloc_locked)
ocfs2_meta_unlock(tc->tc_ext_alloc_inode, 1);
- up(&tc->tc_ext_alloc_inode->i_sem);
+ mutex_unlock(&tc->tc_ext_alloc_inode->i_mutex);
iput(tc->tc_ext_alloc_inode);
}
struct config_group *o2hb_group = NULL, *ret = NULL;
void *defs = NULL;
- /* this runs under the parent dir's i_sem; there can be only
+ /* this runs under the parent dir's i_mutex; there can be only
* one caller in here at a time */
if (o2nm_single_cluster)
goto out; /* ENOSPC */
}
/*
- * NOTE: this should always be called with parent dir i_sem taken.
+ * NOTE: this should always be called with parent dir i_mutex taken.
*/
int ocfs2_find_files_on_disk(const char *name,
int namelen,
* Return 0 if the name does not exist
* Return -EEXIST if the directory contains the name
*
- * Callers should have i_sem + a cluster lock on dir
+ * Callers should have i_mutex + a cluster lock on dir
*/
int ocfs2_check_dir_for_entry(struct inode *dir,
const char *name,
}
/* blocks people in read/write from reading our allocation
- * until we're done changing it. We depend on i_sem to block
+ * until we're done changing it. We depend on i_mutex to block
* other extend/truncate calls while we're here. Ordering wrt
* start_trans is important here -- always do it before! */
down_write(&OCFS2_I(inode)->ip_alloc_sem);
filp->f_flags &= ~O_DIRECT;
#endif
- down(&inode->i_sem);
- /* to match setattr's i_sem -> i_alloc_sem -> rw_lock ordering */
+ mutex_lock(&inode->i_mutex);
+ /* to match setattr's i_mutex -> i_alloc_sem -> rw_lock ordering */
if (filp->f_flags & O_DIRECT) {
have_alloc_sem = 1;
down_read(&inode->i_alloc_sem);
up_read(&inode->i_alloc_sem);
if (rw_level != -1)
ocfs2_rw_unlock(inode, rw_level);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
mlog_exit(ret);
return ret;
goto bail;
}
- down(&inode_alloc_inode->i_sem);
+ mutex_lock(&inode_alloc_inode->i_mutex);
status = ocfs2_meta_lock(inode_alloc_inode, NULL, &inode_alloc_bh, 1);
if (status < 0) {
- up(&inode_alloc_inode->i_sem);
+ mutex_unlock(&inode_alloc_inode->i_mutex);
mlog_errno(status);
goto bail;
ocfs2_commit_trans(handle);
bail_unlock:
ocfs2_meta_unlock(inode_alloc_inode, 1);
- up(&inode_alloc_inode->i_sem);
+ mutex_unlock(&inode_alloc_inode->i_mutex);
brelse(inode_alloc_bh);
bail:
iput(inode_alloc_inode);
/* Lock the orphan dir. The lock will be held for the entire
* delete_inode operation. We do this now to avoid races with
* recovery completion on other nodes. */
- down(&orphan_dir_inode->i_sem);
+ mutex_lock(&orphan_dir_inode->i_mutex);
status = ocfs2_meta_lock(orphan_dir_inode, NULL, &orphan_dir_bh, 1);
if (status < 0) {
- up(&orphan_dir_inode->i_sem);
+ mutex_unlock(&orphan_dir_inode->i_mutex);
mlog_errno(status);
goto bail;
bail_unlock_dir:
ocfs2_meta_unlock(orphan_dir_inode, 1);
- up(&orphan_dir_inode->i_sem);
+ mutex_unlock(&orphan_dir_inode->i_mutex);
brelse(orphan_dir_bh);
bail:
iput(orphan_dir_inode);
atomic_inc(&inode->i_count);
/* we're obviously changing it... */
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
/* sanity check */
BUG_ON(OCFS2_I(inode)->ip_handle);
OCFS2_I(inode)->ip_handle = NULL;
list_del_init(&OCFS2_I(inode)->ip_handle_list);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
iput(inode);
}
}
goto out;
}
- down(&orphan_dir_inode->i_sem);
+ mutex_lock(&orphan_dir_inode->i_mutex);
status = ocfs2_meta_lock(orphan_dir_inode, NULL, NULL, 0);
if (status < 0) {
- up(&orphan_dir_inode->i_sem);
+ mutex_unlock(&orphan_dir_inode->i_mutex);
mlog_errno(status);
goto out;
}
if (!bh)
status = -EINVAL;
if (status < 0) {
- up(&orphan_dir_inode->i_sem);
+ mutex_unlock(&orphan_dir_inode->i_mutex);
if (bh)
brelse(bh);
mlog_errno(status);
if (!ocfs2_check_dir_entry(orphan_dir_inode,
de, bh, local)) {
- up(&orphan_dir_inode->i_sem);
+ mutex_unlock(&orphan_dir_inode->i_mutex);
status = -EINVAL;
mlog_errno(status);
brelse(bh);
}
brelse(bh);
}
- up(&orphan_dir_inode->i_sem);
+ mutex_unlock(&orphan_dir_inode->i_mutex);
ocfs2_meta_unlock(orphan_dir_inode, 0);
have_disk_lock = 0;
goto bail;
}
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
status = ocfs2_read_block(osb, OCFS2_I(inode)->ip_blkno,
&alloc_bh, 0, inode);
brelse(alloc_bh);
if (inode) {
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
iput(inode);
}
/*
* make sure we've got at least bitswanted contiguous bits in the
- * local alloc. You lose them when you drop i_sem.
+ * local alloc. You lose them when you drop i_mutex.
*
* We will add ourselves to the transaction passed in, but may start
* our own in order to shift windows.
*/
static void ocfs2_write_super(struct super_block *sb)
{
- if (down_trylock(&sb->s_lock) == 0)
+ if (mutex_trylock(&sb->s_lock) != 0)
BUG();
sb->s_dirt = 0;
}
newattrs.ia_valid |= ATTR_FILE;
}
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
err = notify_change(dentry, &newattrs);
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
return err;
}
(error = vfs_permission(&nd, MAY_WRITE)) != 0)
goto dput_and_out;
}
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
error = notify_change(nd.dentry, &newattrs);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
dput_and_out:
path_release(&nd);
out:
(error = vfs_permission(&nd, MAY_WRITE)) != 0)
goto dput_and_out;
}
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
error = notify_change(nd.dentry, &newattrs);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
dput_and_out:
path_release(&nd);
out:
err = -EPERM;
if (IS_IMMUTABLE(inode) || IS_APPEND(inode))
goto out_putf;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
if (mode == (mode_t) -1)
mode = inode->i_mode;
newattrs.ia_mode = (mode & S_IALLUGO) | (inode->i_mode & ~S_IALLUGO);
newattrs.ia_valid = ATTR_MODE | ATTR_CTIME;
err = notify_change(dentry, &newattrs);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
out_putf:
fput(file);
if (IS_IMMUTABLE(inode) || IS_APPEND(inode))
goto dput_and_out;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
if (mode == (mode_t) -1)
mode = inode->i_mode;
newattrs.ia_mode = (mode & S_IALLUGO) | (inode->i_mode & ~S_IALLUGO);
newattrs.ia_valid = ATTR_MODE | ATTR_CTIME;
error = notify_change(nd.dentry, &newattrs);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
dput_and_out:
path_release(&nd);
}
if (!S_ISDIR(inode->i_mode))
newattrs.ia_valid |= ATTR_KILL_SUID|ATTR_KILL_SGID;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
error = notify_change(dentry, &newattrs);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
out:
return error;
}
* is considered a noninteractive wait:
*/
prepare_to_wait(PIPE_WAIT(*inode), &wait, TASK_INTERRUPTIBLE|TASK_NONINTERACTIVE);
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
schedule();
finish_wait(PIPE_WAIT(*inode), &wait);
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
}
static inline int
do_wakeup = 0;
ret = 0;
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
info = inode->i_pipe;
for (;;) {
int bufs = info->nrbufs;
}
pipe_wait(inode);
}
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
/* Signal writers asynchronously that there is more room. */
if (do_wakeup) {
wake_up_interruptible(PIPE_WAIT(*inode));
do_wakeup = 0;
ret = 0;
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
info = inode->i_pipe;
if (!PIPE_READERS(*inode)) {
PIPE_WAITING_WRITERS(*inode)--;
}
out:
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
if (do_wakeup) {
wake_up_interruptible(PIPE_WAIT(*inode));
kill_fasync(PIPE_FASYNC_READERS(*inode), SIGIO, POLL_IN);
switch (cmd) {
case FIONREAD:
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
info = inode->i_pipe;
count = 0;
buf = info->curbuf;
count += info->bufs[buf].len;
buf = (buf+1) & (PIPE_BUFFERS-1);
}
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
return put_user(count, (int __user *)arg);
default:
return -EINVAL;
static int
pipe_release(struct inode *inode, int decr, int decw)
{
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
PIPE_READERS(*inode) -= decr;
PIPE_WRITERS(*inode) -= decw;
if (!PIPE_READERS(*inode) && !PIPE_WRITERS(*inode)) {
kill_fasync(PIPE_FASYNC_READERS(*inode), SIGIO, POLL_IN);
kill_fasync(PIPE_FASYNC_WRITERS(*inode), SIGIO, POLL_OUT);
}
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
return 0;
}
struct inode *inode = filp->f_dentry->d_inode;
int retval;
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
retval = fasync_helper(fd, filp, on, PIPE_FASYNC_READERS(*inode));
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
if (retval < 0)
return retval;
struct inode *inode = filp->f_dentry->d_inode;
int retval;
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
retval = fasync_helper(fd, filp, on, PIPE_FASYNC_WRITERS(*inode));
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
if (retval < 0)
return retval;
struct inode *inode = filp->f_dentry->d_inode;
int retval;
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
retval = fasync_helper(fd, filp, on, PIPE_FASYNC_READERS(*inode));
if (retval >= 0)
retval = fasync_helper(fd, filp, on, PIPE_FASYNC_WRITERS(*inode));
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
if (retval < 0)
return retval;
{
/* We could have perhaps used atomic_t, but this and friends
below are the only places. So it doesn't seem worthwhile. */
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
PIPE_READERS(*inode)++;
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
return 0;
}
static int
pipe_write_open(struct inode *inode, struct file *filp)
{
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
PIPE_WRITERS(*inode)++;
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
return 0;
}
static int
pipe_rdwr_open(struct inode *inode, struct file *filp)
{
- down(PIPE_SEM(*inode));
+ mutex_lock(PIPE_MUTEX(*inode));
if (filp->f_mode & FMODE_READ)
PIPE_READERS(*inode)++;
if (filp->f_mode & FMODE_WRITE)
PIPE_WRITERS(*inode)++;
- up(PIPE_SEM(*inode));
+ mutex_unlock(PIPE_MUTEX(*inode));
return 0;
}
sync_blockdev(sb->s_bdev);
/* Now when everything is written we can discard the pagecache so
- * that userspace sees the changes. We need i_sem and so we could
+ * that userspace sees the changes. We need i_mutex and so we could
* not do it inside dqonoff_sem. Moreover we need to be careful
* about races with quotaoff() (that is the reason why we have own
* reference to inode). */
up(&sb_dqopt(sb)->dqonoff_sem);
for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
if (discard[cnt]) {
- down(&discard[cnt]->i_sem);
+ mutex_lock(&discard[cnt]->i_mutex);
truncate_inode_pages(&discard[cnt]->i_data, 0);
- up(&discard[cnt]->i_sem);
+ mutex_unlock(&discard[cnt]->i_mutex);
iput(discard[cnt]);
}
}
long long retval;
struct inode *inode = file->f_mapping->host;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
switch (origin) {
case 2:
offset += inode->i_size;
}
retval = offset;
}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return retval;
}
if (res)
goto out;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
res = -ENOENT;
if (!IS_DEADDIR(inode)) {
res = file->f_op->readdir(file, buf, filler);
file_accessed(file);
}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
out:
return res;
}
}
reiserfs_write_lock(inode->i_sb);
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
/* freeing preallocation only involves relogging blocks that
* are already in the current transaction. preallocation gets
* freed at the end of each transaction, so it is impossible for
err = reiserfs_truncate_file(inode, 0);
}
out:
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
reiserfs_write_unlock(inode->i_sb);
return err;
}
if (unlikely(!access_ok(VERIFY_READ, buf, count)))
return -EFAULT;
- down(&inode->i_sem); // locks the entire file for just us
+ mutex_lock(&inode->i_mutex); // locks the entire file for just us
pos = *ppos;
generic_osync_inode(inode, file->f_mapping,
OSYNC_METADATA | OSYNC_DATA);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
reiserfs_async_progress_wait(inode->i_sb);
return (already_written != 0) ? already_written : res;
out:
- up(&inode->i_sem); // unlock the file on exit.
+ mutex_unlock(&inode->i_mutex); // unlock the file on exit.
return res;
}
/* The = 0 happens when we abort creating a new inode for some reason like lack of space.. */
if (!(inode->i_state & I_NEW) && INODE_PKEY(inode)->k_objectid != 0) { /* also handles bad_inode case */
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
reiserfs_delete_xattrs(inode);
if (journal_begin(&th, inode->i_sb, jbegin_count)) {
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
goto out;
}
reiserfs_update_inode_transaction(inode);
DQUOT_FREE_INODE(inode);
if (journal_end(&th, inode->i_sb, jbegin_count)) {
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
goto out;
}
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
/* check return value from reiserfs_delete_object after
* ending the transaction
/* we don't have to make sure the conversion did not happen while
** we were locking the page because anyone that could convert
- ** must first take i_sem.
+ ** must first take i_mutex.
**
** We must fix the tail page for writing because it might have buffers
** that are mapped, but have a block number of 0. This indicates tail
BUG_ON(!th->t_trans_id);
#ifdef REISERFS_PREALLOCATE
- if (!(flags & GET_BLOCK_NO_ISEM)) {
+ if (!(flags & GET_BLOCK_NO_IMUX)) {
return reiserfs_new_unf_blocknrs2(th, inode, allocated_block_nr,
path, block);
}
/* this is where we fill in holes in the file. */
if (use_get_block) {
retval = reiserfs_get_block(inode, block, bh_result,
- GET_BLOCK_CREATE | GET_BLOCK_NO_ISEM
+ GET_BLOCK_CREATE | GET_BLOCK_NO_IMUX
| GET_BLOCK_NO_DANGLE);
if (!retval) {
if (!buffer_mapped(bh_result)
/* we need to make sure nobody is changing the file size beneath
** us
*/
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
write_from = inode->i_size & (blocksize - 1);
/* if we are on a block boundary, we are already unpacked. */
page_cache_release(page);
out:
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
reiserfs_write_unlock(inode->i_sb);
return retval;
}
size_t towrite = len;
struct buffer_head tmp_bh, *bh;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
inode->i_version++;
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
mark_inode_dirty(inode);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return len - towrite;
}
1) * p_s_sb->s_blocksize;
pos1 = pos;
- // we are protected by i_sem. The tail can not disapper, not
+ // we are protected by i_mutex. The tail can not disappear, nor
// append can be done either
// we are in truncate or packing tail in file_release
goto out;
} else if (!xaroot->d_inode) {
int err;
- down(&privroot->d_inode->i_sem);
+ mutex_lock(&privroot->d_inode->i_mutex);
err =
privroot->d_inode->i_op->mkdir(privroot->d_inode, xaroot,
0700);
- up(&privroot->d_inode->i_sem);
+ mutex_unlock(&privroot->d_inode->i_mutex);
if (err) {
dput(xaroot);
} else if (flags & XATTR_REPLACE || flags & FL_READONLY) {
goto out;
} else {
- /* inode->i_sem is down, so nothing else can try to create
+ /* inode->i_mutex is held, so nothing else can try to create
* the same xattr */
err = xadir->d_inode->i_op->create(xadir->d_inode, xafile,
0700 | S_IFREG, NULL);
* and don't mess with f->f_pos, but the idea is the same. Do some
* action on each and every entry in the directory.
*
- * we're called with i_sem held, so there are no worries about the directory
+ * we're called with i_mutex held, so there are no worries about the directory
* changing underneath us.
*/
static int __xattr_readdir(struct file *filp, void *dirent, filldir_t filldir)
int res = -ENOTDIR;
if (!file->f_op || !file->f_op->readdir)
goto out;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
// down(&inode->i_zombie);
res = -ENOENT;
if (!IS_DEADDIR(inode)) {
unlock_kernel();
}
// up(&inode->i_zombie);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
out:
return res;
}
/* Generic extended attribute operations that can be used by xa plugins */
/*
- * inode->i_sem: down
+ * inode->i_mutex: down
*/
int
reiserfs_xattr_set(struct inode *inode, const char *name, const void *buffer,
/* Resize it so we're ok to write there */
newattrs.ia_size = buffer_size;
newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME;
- down(&xinode->i_sem);
+ mutex_lock(&xinode->i_mutex);
err = notify_change(fp->f_dentry, &newattrs);
if (err)
goto out_filp;
}
out_filp:
- up(&xinode->i_sem);
+ mutex_unlock(&xinode->i_mutex);
fput(fp);
out:
}
/*
- * inode->i_sem: down
+ * inode->i_mutex: down
*/
int
reiserfs_xattr_get(const struct inode *inode, const char *name, void *buffer,
}
-/* This is called w/ inode->i_sem downed */
+/* This is called with inode->i_mutex held */
int reiserfs_delete_xattrs(struct inode *inode)
{
struct file *fp;
/*
* Inode operation getxattr()
- * Preliminary locking: we down dentry->d_inode->i_sem
+ * Preliminary locking: we lock dentry->d_inode->i_mutex
*/
ssize_t
reiserfs_getxattr(struct dentry * dentry, const char *name, void *buffer,
/*
* Inode operation setxattr()
*
- * dentry->d_inode->i_sem down
+ * dentry->d_inode->i_mutex down
*/
int
reiserfs_setxattr(struct dentry *dentry, const char *name, const void *value,
/*
* Inode operation removexattr()
*
- * dentry->d_inode->i_sem down
+ * dentry->d_inode->i_mutex down
*/
int reiserfs_removexattr(struct dentry *dentry, const char *name)
{
/*
* Inode operation listxattr()
*
- * Preliminary locking: we down dentry->d_inode->i_sem
+ * Preliminary locking: we lock dentry->d_inode->i_mutex
*/
ssize_t reiserfs_listxattr(struct dentry * dentry, char *buffer, size_t size)
{
if (!IS_ERR(dentry)) {
if (!(mount_flags & MS_RDONLY) && !dentry->d_inode) {
struct inode *inode = dentry->d_parent->d_inode;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
err = inode->i_op->mkdir(inode, dentry, 0700);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
if (err) {
dput(dentry);
dentry = NULL;
/*
* Inode operation get_posix_acl().
*
- * inode->i_sem: down
+ * inode->i_mutex: down
* BKL held [before 2.5.x]
*/
struct posix_acl *reiserfs_get_acl(struct inode *inode, int type)
/*
* Inode operation set_posix_acl().
*
- * inode->i_sem: down
+ * inode->i_mutex: down
* BKL held [before 2.5.x]
*/
static int
return error;
}
-/* dir->i_sem: down,
+/* dir->i_mutex: locked,
* inode is new and not released into the wild yet */
int
reiserfs_inherit_default_acl(struct inode *dir, struct dentry *dentry,
}
parent = dget(parent);
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
d = lookup_one_len(name, parent, strlen(name));
if (IS_ERR(d)) {
d = NULL;
simple_release_fs(&relayfs_mount, &relayfs_mount_count);
exit:
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
dput(parent);
return d;
}
return -EINVAL;
parent = dget(parent);
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
if (dentry->d_inode) {
if (S_ISDIR(dentry->d_inode->i_mode))
error = simple_rmdir(parent->d_inode, dentry);
}
if (!error)
dput(dentry);
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
dput(parent);
if (!error)
ssize_t ret = 0;
void *from;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
if(!relay_file_read_avail(buf, *ppos))
goto out;
relay_file_read_consume(buf, read_start, count);
*ppos = relay_file_read_end_pos(buf, read_start, count);
out:
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return ret;
}
INIT_HLIST_HEAD(&s->s_anon);
INIT_LIST_HEAD(&s->s_inodes);
init_rwsem(&s->s_umount);
- sema_init(&s->s_lock, 1);
+ mutex_init(&s->s_lock);
down_write(&s->s_umount);
s->s_count = S_BIAS;
atomic_set(&s->s_active, 1);
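For reference, the initializer change above follows the pattern below; a minimal sketch assuming <asm/semaphore.h> and <linux/mutex.h>, with made-up lock names. DECLARE_MUTEX() defines a semaphore with a count of 1 (the old idiom), DEFINE_MUTEX() defines a struct mutex, and the runtime initializer drops the count argument:

static DECLARE_MUTEX(old_style_lock);	/* semaphore, count initialized to 1 */
static DEFINE_MUTEX(new_style_lock);	/* struct mutex */

static void init_embedded_locks(struct semaphore *sem, struct mutex *mtx)
{
	sema_init(sem, 1);	/* old: binary semaphore used as a lock */
	mutex_init(mtx);	/* new: dedicated single-holder lock */
}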
int error;
umode_t mode = S_IFDIR| S_IRWXU | S_IRUGO | S_IXUGO;
- down(&p->d_inode->i_sem);
+ mutex_lock(&p->d_inode->i_mutex);
*d = lookup_one_len(n, p, strlen(n));
if (!IS_ERR(*d)) {
error = sysfs_make_dirent(p->d_fsdata, *d, k, mode, SYSFS_DIR);
dput(*d);
} else
error = PTR_ERR(*d);
- up(&p->d_inode->i_sem);
+ mutex_unlock(&p->d_inode->i_mutex);
return error;
}
struct dentry * parent = dget(d->d_parent);
struct sysfs_dirent * sd;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
d_delete(d);
sd = d->d_fsdata;
list_del_init(&sd->s_sibling);
pr_debug(" o %s removing done (%d)\n",d->d_name.name,
atomic_read(&d->d_count));
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
dput(parent);
}
return;
pr_debug("sysfs %s: removing dir\n",dentry->d_name.name);
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
parent_sd = dentry->d_fsdata;
list_for_each_entry_safe(sd, tmp, &parent_sd->s_children, s_sibling) {
if (!sd->s_element || !(sd->s_type & SYSFS_NOT_PINNED))
sysfs_drop_dentry(sd, dentry);
sysfs_put(sd);
}
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
remove_dir(dentry);
/**
down_write(&sysfs_rename_sem);
parent = kobj->parent->dentry;
- down(&parent->d_inode->i_sem);
+ mutex_lock(&parent->d_inode->i_mutex);
new_dentry = lookup_one_len(new_name, parent, strlen(new_name));
if (!IS_ERR(new_dentry)) {
error = -EEXIST;
dput(new_dentry);
}
- up(&parent->d_inode->i_sem);
+ mutex_unlock(&parent->d_inode->i_mutex);
up_write(&sysfs_rename_sem);
return error;
struct dentry * dentry = file->f_dentry;
struct sysfs_dirent * parent_sd = dentry->d_fsdata;
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
file->private_data = sysfs_new_dirent(parent_sd, NULL);
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
return file->private_data ? 0 : -ENOMEM;
struct dentry * dentry = file->f_dentry;
struct sysfs_dirent * cursor = file->private_data;
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
list_del_init(&cursor->s_sibling);
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
release_sysfs_dirent(cursor);
{
struct dentry * dentry = file->f_dentry;
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
switch (origin) {
case 1:
offset += file->f_pos;
if (offset >= 0)
break;
default:
- up(&file->f_dentry->d_inode->i_sem);
+ mutex_unlock(&file->f_dentry->d_inode->i_mutex);
return -EINVAL;
}
if (offset != file->f_pos) {
list_add_tail(&cursor->s_sibling, p);
}
}
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
return offset;
}
EXPORT_SYMBOL_GPL(sysfs_create_dir);
EXPORT_SYMBOL_GPL(sysfs_remove_dir);
EXPORT_SYMBOL_GPL(sysfs_rename_dir);
-
umode_t mode = (attr->mode & S_IALLUGO) | S_IFREG;
int error = 0;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
error = sysfs_make_dirent(parent_sd, NULL, (void *) attr, mode, type);
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
return error;
}
struct dentry * victim;
int res = -ENOENT;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
victim = lookup_one_len(attr->name, dir, strlen(attr->name));
if (!IS_ERR(victim)) {
/* make sure dentry is really there */
*/
dput(victim);
}
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
return res;
}
struct iattr newattrs;
int res = -ENOENT;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
victim = lookup_one_len(attr->name, dir, strlen(attr->name));
if (!IS_ERR(victim)) {
if (victim->d_inode &&
(victim->d_parent->d_inode == dir->d_inode)) {
inode = victim->d_inode;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
newattrs.ia_mode = (mode & S_IALLUGO) |
(inode->i_mode & ~S_IALLUGO);
newattrs.ia_valid = ATTR_MODE | ATTR_CTIME;
res = notify_change(victim, &newattrs);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
}
dput(victim);
}
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
return res;
}
EXPORT_SYMBOL_GPL(sysfs_create_file);
EXPORT_SYMBOL_GPL(sysfs_remove_file);
EXPORT_SYMBOL_GPL(sysfs_update_file);
-
/*
* Unhashes the dentry corresponding to given sysfs_dirent
- * Called with parent inode's i_sem held.
+ * Called with parent inode's i_mutex held.
*/
void sysfs_drop_dentry(struct sysfs_dirent * sd, struct dentry * parent)
{
/* no inode means this hasn't been made visible yet */
return;
- down(&dir->d_inode->i_sem);
+ mutex_lock(&dir->d_inode->i_mutex);
list_for_each_entry(sd, &parent_sd->s_children, s_sibling) {
if (!sd->s_element)
continue;
break;
}
}
- up(&dir->d_inode->i_sem);
+ mutex_unlock(&dir->d_inode->i_mutex);
}
-
-
BUG_ON(!kobj || !kobj->dentry || !name);
- down(&dentry->d_inode->i_sem);
+ mutex_lock(&dentry->d_inode->i_mutex);
error = sysfs_add_link(dentry, name, target);
- up(&dentry->d_inode->i_sem);
+ mutex_unlock(&dentry->d_inode->i_mutex);
return error;
}
EXPORT_SYMBOL_GPL(sysfs_create_link);
EXPORT_SYMBOL_GPL(sysfs_remove_link);
-
size_t towrite = len;
struct buffer_head *bh;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
while (towrite > 0) {
tocopy = sb->s_blocksize - offset < towrite ?
sb->s_blocksize - offset : towrite;
}
out:
if (len == towrite) {
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return err;
}
if (inode->i_size < off+len-towrite)
inode->i_version++;
inode->i_mtime = inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return len - towrite;
}
}
}
- down(&d->d_inode->i_sem);
+ mutex_lock(&d->d_inode->i_mutex);
error = security_inode_setxattr(d, kname, kvalue, size, flags);
if (error)
goto out;
fsnotify_xattr(d);
}
out:
- up(&d->d_inode->i_sem);
+ mutex_unlock(&d->d_inode->i_mutex);
kfree(kvalue);
return error;
}
error = security_inode_removexattr(d, kname);
if (error)
goto out;
- down(&d->d_inode->i_sem);
+ mutex_lock(&d->d_inode->i_mutex);
error = d->d_inode->i_op->removexattr(d, kname);
- up(&d->d_inode->i_sem);
+ mutex_unlock(&d->d_inode->i_mutex);
if (!error)
fsnotify_xattr(d);
}
#define __XFS_SUPPORT_MUTEX_H__
#include <linux/spinlock.h>
-#include <asm/semaphore.h>
+#include <linux/mutex.h>
/*
* Map the mutex'es from IRIX to Linux semaphores.
* callers.
*/
#define MUTEX_DEFAULT 0x0
-typedef struct semaphore mutex_t;
-#define mutex_init(lock, type, name) sema_init(lock, 1)
-#define mutex_destroy(lock) sema_init(lock, -99)
-#define mutex_lock(lock, num) down(lock)
-#define mutex_trylock(lock) (down_trylock(lock) ? 0 : 1)
-#define mutex_unlock(lock) up(lock)
+typedef struct mutex mutex_t;
+//#define mutex_destroy(lock) do{}while(0)
#endif /* __XFS_SUPPORT_MUTEX_H__ */
ip->i_nlink = va.va_nlink;
ip->i_blocks = va.va_nblocks;
- /* we're under i_sem so i_size can't change under us */
+ /* we're under i_mutex so i_size can't change under us */
if (i_size_read(ip) != va.va_size)
i_size_write(ip, va.va_size);
}
}
if (unlikely(ioflags & IO_ISDIRECT))
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
xfs_ilock(ip, XFS_IOLOCK_SHARED);
if (DM_EVENT_ENABLED(vp->v_vfsp, ip, DM_EVENT_READ) &&
unlock_isem:
if (unlikely(ioflags & IO_ISDIRECT))
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
return ret;
}
iolock = XFS_IOLOCK_EXCL;
locktype = VRWLOCK_WRITE;
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
} else {
iolock = XFS_IOLOCK_SHARED;
locktype = VRWLOCK_WRITE_DIRECT;
int dmflags = FILP_DELAY_FLAG(file);
if (need_isem)
- dmflags |= DM_FLAGS_ISEM;
+ dmflags |= DM_FLAGS_IMUX;
xfs_iunlock(xip, XFS_ILOCK_EXCL);
error = XFS_SEND_DATA(xip->i_mount, DM_EVENT_WRITE, vp,
if (need_isem) {
/* demote the lock now the cached pages are gone */
XFS_ILOCK_DEMOTE(mp, io, XFS_IOLOCK_EXCL);
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
iolock = XFS_IOLOCK_SHARED;
locktype = VRWLOCK_WRITE_DIRECT;
xfs_rwunlock(bdp, locktype);
if (need_isem)
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
error = XFS_SEND_NAMESP(xip->i_mount, DM_EVENT_NOSPACE, vp,
DM_RIGHT_NULL, vp, DM_RIGHT_NULL, NULL, NULL,
0, 0, 0); /* Delay flag intentionally unused */
if (error)
goto out_nounlocks;
if (need_isem)
- down(&inode->i_sem);
+ mutex_lock(&inode->i_mutex);
xfs_rwlock(bdp, locktype);
pos = xip->i_d.di_size;
ret = 0;
xfs_rwunlock(bdp, locktype);
if (need_isem)
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
error = sync_page_range(inode, mapping, pos, ret);
if (!error)
xfs_rwunlock(bdp, locktype);
out_unlock_isem:
if (need_isem)
- up(&inode->i_sem);
+ mutex_unlock(&inode->i_mutex);
out_nounlocks:
return -error;
}
*/
if (brandnewdquot) {
dqp->dq_flnext = dqp->dq_flprev = dqp;
- mutex_init(&dqp->q_qlock, MUTEX_DEFAULT, "xdq");
+ mutex_init(&dqp->q_qlock);
initnsema(&dqp->q_flock, 1, "fdq");
sv_init(&dqp->q_pinwait, SV_DEFAULT, "pdq");
xfs_dqlock(
xfs_dquot_t *dqp)
{
- mutex_lock(&(dqp->q_qlock), PINOD);
+ mutex_lock(&(dqp->q_qlock));
}
void
xqm->qm_dqfree_ratio = XFS_QM_DQFREE_RATIO;
xqm->qm_nrefs = 0;
#ifdef DEBUG
- mutex_init(&qcheck_lock, MUTEX_DEFAULT, "qchk");
+ xfs_mutex_init(&qcheck_lock, MUTEX_DEFAULT, "qchk");
#endif
return xqm;
}
qinf->qi_dqreclaims = 0;
/* mutex used to serialize quotaoffs */
- mutex_init(&qinf->qi_quotaofflock, MUTEX_DEFAULT, "qoff");
+ mutex_init(&qinf->qi_quotaofflock);
/* Precalc some constants */
qinf->qi_dqchunklen = XFS_FSB_TO_BB(mp, XFS_DQUOT_CLUSTER_SIZE_FSB);
char *str,
int n)
{
- mutex_init(&list->qh_lock, MUTEX_DEFAULT, str);
+ mutex_init(&list->qh_lock);
list->qh_next = NULL;
list->qh_version = 0;
list->qh_nelems = 0;
xfs_qm_freelist_init(xfs_frlist_t *ql)
{
ql->qh_next = ql->qh_prev = (xfs_dquot_t *) ql;
- mutex_init(&ql->qh_lock, MUTEX_DEFAULT, "dqf");
+ mutex_init(&ql->qh_lock);
ql->qh_version = 0;
ql->qh_nelems = 0;
}
{
xfs_dquot_t *dqp, *nextdqp;
- mutex_lock(&ql->qh_lock, PINOD);
+ mutex_lock(&ql->qh_lock);
for (dqp = ql->qh_next;
dqp != (xfs_dquot_t *)ql; ) {
xfs_dqlock(dqp);
#define XFS_QM_IWARNLIMIT 5
#define XFS_QM_RTBWARNLIMIT 5
-#define XFS_QM_LOCK(xqm) (mutex_lock(&xqm##_lock, PINOD))
+#define XFS_QM_LOCK(xqm) (mutex_lock(&xqm##_lock))
#define XFS_QM_UNLOCK(xqm) (mutex_unlock(&xqm##_lock))
#define XFS_QM_HOLD(xqm) ((xqm)->qm_nrefs++)
#define XFS_QM_RELE(xqm) ((xqm)->qm_nrefs--)
KERN_INFO "SGI XFS Quota Management subsystem\n";
printk(message);
- mutex_init(&xfs_Gqm_lock, MUTEX_DEFAULT, "xfs_qmlock");
+ mutex_init(&xfs_Gqm_lock);
vfs_bhv_set_custom(&xfs_qmops, &xfs_qmcore_xfs);
xfs_qm_init_procfs();
}
*/
ASSERT(mp->m_quotainfo);
if (mp->m_quotainfo)
- mutex_lock(&(XFS_QI_QOFFLOCK(mp)), PINOD);
+ mutex_lock(&(XFS_QI_QOFFLOCK(mp)));
ASSERT(mp->m_quotainfo);
/*
* Switch on quota enforcement in core.
*/
- mutex_lock(&(XFS_QI_QOFFLOCK(mp)), PINOD);
+ mutex_lock(&(XFS_QI_QOFFLOCK(mp)));
mp->m_qflags |= (flags & XFS_ALL_QUOTA_ENFD);
mutex_unlock(&(XFS_QI_QOFFLOCK(mp)));
* a quotaoff from happening). (XXXThis doesn't currently happen
* because we take the vfslock before calling xfs_qm_sysent).
*/
- mutex_lock(&(XFS_QI_QOFFLOCK(mp)), PINOD);
+ mutex_lock(&(XFS_QI_QOFFLOCK(mp)));
/*
* Get the dquot (locked), and join it to the transaction.
xfs_log_force(mp, (xfs_lsn_t)0, XFS_LOG_FORCE | XFS_LOG_SYNC);
XFS_bflush(mp->m_ddev_targp);
- mutex_lock(&qcheck_lock, PINOD);
+ mutex_lock(&qcheck_lock);
/* There should be absolutely no quota activity while this
is going on. */
qmtest_udqtab = kmem_zalloc(qmtest_hashmask *
#define XFS_QI_MPLNEXT(mp) ((mp)->m_quotainfo->qi_dqlist.qh_next)
#define XFS_QI_MPLNDQUOTS(mp) ((mp)->m_quotainfo->qi_dqlist.qh_nelems)
-#define XQMLCK(h) (mutex_lock(&((h)->qh_lock), PINOD))
+#define XQMLCK(h) (mutex_lock(&((h)->qh_lock)))
#define XQMUNLCK(h) (mutex_unlock(&((h)->qh_lock)))
#ifdef DEBUG
struct xfs_dqhash;
void
uuid_init(void)
{
- mutex_init(&uuid_monitor, MUTEX_DEFAULT, "uuid_monitor");
+ mutex_init(&uuid_monitor);
}
/*
{
int i, hole;
- mutex_lock(&uuid_monitor, PVFS);
+ mutex_lock(&uuid_monitor);
for (i = 0, hole = -1; i < uuid_table_size; i++) {
if (uuid_is_nil(&uuid_table[i])) {
hole = i;
{
int i;
- mutex_lock(&uuid_monitor, PVFS);
+ mutex_lock(&uuid_monitor);
for (i = 0; i < uuid_table_size; i++) {
if (uuid_is_nil(&uuid_table[i]))
continue;
#define DM_FLAGS_NDELAY 0x001 /* return EAGAIN after dm_pending() */
#define DM_FLAGS_UNWANTED 0x002 /* event not in fsys dm_eventset_t */
-#define DM_FLAGS_ISEM 0x004 /* thread holds i_sem */
+#define DM_FLAGS_IMUX 0x004 /* thread holds i_mutex */
#define DM_FLAGS_IALLOCSEM_RD 0x010 /* thread holds i_alloc_sem rd */
#define DM_FLAGS_IALLOCSEM_WR 0x020 /* thread holds i_alloc_sem wr */
*/
#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,0)
#define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \
- DM_FLAGS_ISEM : 0)
-#define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_ISEM)
+ DM_FLAGS_IMUX : 0)
+#define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_IMUX)
#endif
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,0)) && \
(LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,22))
#define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \
- DM_FLAGS_IALLOCSEM_RD : DM_FLAGS_ISEM)
-#define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_ISEM)
+ DM_FLAGS_IALLOCSEM_RD : DM_FLAGS_IMUX)
+#define DM_SEM_FLAG_WR (DM_FLAGS_IALLOCSEM_WR | DM_FLAGS_IMUX)
#endif
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,4,21)
#define DM_SEM_FLAG_RD(ioflags) (((ioflags) & IO_ISDIRECT) ? \
- 0 : DM_FLAGS_ISEM)
-#define DM_SEM_FLAG_WR (DM_FLAGS_ISEM)
+ 0 : DM_FLAGS_IMUX)
+#define DM_SEM_FLAG_WR (DM_FLAGS_IMUX)
#endif
AIL_LOCKINIT(&mp->m_ail_lock, "xfs_ail");
spinlock_init(&mp->m_sb_lock, "xfs_sb");
- mutex_init(&mp->m_ilock, MUTEX_DEFAULT, "xfs_ilock");
+ mutex_init(&mp->m_ilock);
initnsema(&mp->m_growlock, 1, "xfs_grow");
/*
* Initialize the AIL.
int msb_delta; /* Change to make to specified field */
} xfs_mod_sb_t;
-#define XFS_MOUNT_ILOCK(mp) mutex_lock(&((mp)->m_ilock), PINOD)
+#define XFS_MOUNT_ILOCK(mp) mutex_lock(&((mp)->m_ilock))
#define XFS_MOUNT_IUNLOCK(mp) mutex_unlock(&((mp)->m_ilock))
#define XFS_SB_LOCK(mp) mutex_spinlock(&(mp)->m_sb_lock)
#define XFS_SB_UNLOCK(mp,s) mutex_spinunlock(&(mp)->m_sb_lock,(s))
}
#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
#define atomic_add_unless(v, a, u) \
({ \
--- /dev/null
+/*
+ * Pull in the generic implementation for the mutex fastpath.
+ *
+ * TODO: implement optimized primitives instead, or leave the generic
+ * implementation in place, or pick the atomic_xchg() based generic
+ * implementation. (see asm-generic/mutex-xchg.h for details)
+ */
+
+#include <asm-generic/mutex-dec.h>
#endif /* __LINUX_ARM_ARCH__ */
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+
static inline int atomic_add_unless(atomic_t *v, int a, int u)
{
int c, old;
--- /dev/null
+/*
+ * include/asm-arm/mutex.h
+ *
+ * ARM optimized mutex locking primitives
+ *
+ * Please look into asm-generic/mutex-xchg.h for a formal definition.
+ */
+#ifndef _ASM_MUTEX_H
+#define _ASM_MUTEX_H
+
+#if __LINUX_ARM_ARCH__ < 6
+/* On pre-ARMv6 hardware the swp based implementation is the most efficient. */
+# include <asm-generic/mutex-xchg.h>
+#else
+
+/*
+ * Attempting to lock a mutex on ARMv6+ can be done with a bastardized
+ * atomic decrement (it is not a reliable atomic decrement but it satisfies
+ * the defined semantics for our purpose, while being smaller and faster
+ * than a real atomic decrement or atomic swap). The idea is to attempt
+ * decrementing the lock value only once. If once decremented it isn't zero,
+ * or if its store-back fails due to a dispute on the exclusive store, we
+ * simply bail out immediately through the slow path where the lock will be
+ * reattempted until it succeeds.
+ */
+#define __mutex_fastpath_lock(count, fail_fn) \
+do { \
+ int __ex_flag, __res; \
+ \
+ typecheck(atomic_t *, count); \
+ typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \
+ \
+ __asm__ ( \
+ "ldrex %0, [%2] \n" \
+ "sub %0, %0, #1 \n" \
+ "strex %1, %0, [%2] \n" \
+ \
+ : "=&r" (__res), "=&r" (__ex_flag) \
+ : "r" (&(count)->counter) \
+ : "cc","memory" ); \
+ \
+ if (unlikely(__res || __ex_flag)) \
+ fail_fn(count); \
+} while (0)
+
+#define __mutex_fastpath_lock_retval(count, fail_fn) \
+({ \
+ int __ex_flag, __res; \
+ \
+ typecheck(atomic_t *, count); \
+ typecheck_fn(fastcall int (*)(atomic_t *), fail_fn); \
+ \
+ __asm__ ( \
+ "ldrex %0, [%2] \n" \
+ "sub %0, %0, #1 \n" \
+ "strex %1, %0, [%2] \n" \
+ \
+ : "=&r" (__res), "=&r" (__ex_flag) \
+ : "r" (&(count)->counter) \
+ : "cc","memory" ); \
+ \
+ __res |= __ex_flag; \
+ if (unlikely(__res != 0)) \
+ __res = fail_fn(count); \
+ __res; \
+})
+
+/*
+ * Same trick is used for the unlock fast path. However the original value,
+ * rather than the result, is used to test for success in order to have
+ * better generated assembly.
+ */
+#define __mutex_fastpath_unlock(count, fail_fn) \
+do { \
+ int __ex_flag, __res, __orig; \
+ \
+ typecheck(atomic_t *, count); \
+ typecheck_fn(fastcall void (*)(atomic_t *), fail_fn); \
+ \
+ __asm__ ( \
+ "ldrex %0, [%3] \n" \
+ "add %1, %0, #1 \n" \
+ "strex %2, %1, [%3] \n" \
+ \
+ : "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag) \
+ : "r" (&(count)->counter) \
+ : "cc","memory" ); \
+ \
+ if (unlikely(__orig || __ex_flag)) \
+ fail_fn(count); \
+} while (0)
+
+/*
+ * If the unlock was done on a contended lock, or if the unlock simply fails
+ * then the mutex remains locked.
+ */
+#define __mutex_slowpath_needs_to_unlock() 1
+
+/*
+ * For __mutex_fastpath_trylock we use another construct which could be
+ * described as a "single value cmpxchg".
+ *
+ * This provides the needed trylock semantics like cmpxchg would, but it is
+ * lighter and less generic than a true cmpxchg implementation.
+ */
+static inline int
+__mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
+{
+ int __ex_flag, __res, __orig;
+
+ __asm__ (
+
+ "1: ldrex %0, [%3] \n"
+ "subs %1, %0, #1 \n"
+ "strexeq %2, %1, [%3] \n"
+ "movlt %0, #0 \n"
+ "cmpeq %2, #0 \n"
+ "bgt 1b \n"
+
+ : "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag)
+ : "r" (&count->counter)
+ : "cc", "memory" );
+
+ return __orig;
+}
+
+#endif
+#endif
return ret;
}
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+
static inline int atomic_add_unless(atomic_t *v, int a, int u)
{
int ret;
return ret;
}
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+
static inline int atomic_add_unless(atomic_t *v, int a, int u)
{
int ret;
--- /dev/null
+/*
+ * Pull in the generic implementation for the mutex fastpath.
+ *
+ * TODO: implement optimized primitives instead, or leave the generic
+ * implementation in place, or pick the atomic_xchg() based generic
+ * implementation. (see asm-generic/mutex-xchg.h for details)
+ */
+
+#include <asm-generic/mutex-dec.h>
#endif
#define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new))
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
#define atomic_add_unless(v, a, u) \
({ \
--- /dev/null
+/*
+ * Pull in the generic implementation for the mutex fastpath.
+ *
+ * TODO: implement optimized primitives instead, or leave the generic
+ * implementation in place, or pick the atomic_xchg() based generic
+ * implementation. (see asm-generic/mutex-xchg.h for details)
+ */
+
+#include <asm-generic/mutex-dec.h>
--- /dev/null
+/*
+ * asm-generic/mutex-dec.h
+ *
+ * Generic implementation of the mutex fastpath, based on atomic
+ * decrement/increment.
+ */
+#ifndef _ASM_GENERIC_MUTEX_DEC_H
+#define _ASM_GENERIC_MUTEX_DEC_H
+
+/**
+ * __mutex_fastpath_lock - try to take the lock by moving the count
+ * from 1 to a 0 value
+ * @count: pointer of type atomic_t
+ * @fail_fn: function to call if the original value was not 1
+ *
+ * Change the count from 1 to a value lower than 1, and call <fail_fn> if
+ * it wasn't 1 originally. This function MUST leave the value lower than
+ * 1 even when the "1" assertion wasn't true.
+ */
+#define __mutex_fastpath_lock(count, fail_fn) \
+do { \
+ if (unlikely(atomic_dec_return(count) < 0)) \
+ fail_fn(count); \
+ else \
+ smp_mb(); \
+} while (0)
+
+/**
+ * __mutex_fastpath_lock_retval - try to take the lock by moving the count
+ * from 1 to a 0 value
+ * @count: pointer of type atomic_t
+ * @fail_fn: function to call if the original value was not 1
+ *
+ * Change the count from 1 to a value lower than 1, and call <fail_fn> if
+ * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
+ * or anything the slow path function returns.
+ */
+static inline int
+__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+{
+ if (unlikely(atomic_dec_return(count) < 0))
+ return fail_fn(count);
+ else {
+ smp_mb();
+ return 0;
+ }
+}
+
+/**
+ * __mutex_fastpath_unlock - try to promote the count from 0 to 1
+ * @count: pointer of type atomic_t
+ * @fail_fn: function to call if the original value was not 0
+ *
+ * Try to promote the count from 0 to 1. If it wasn't 0, call <fail_fn>.
+ * In the failure case, this function is allowed to either set the value to
+ * 1, or to set it to a value lower than 1.
+ *
+ * If the implementation sets it to a value of lower than 1, then the
+ * __mutex_slowpath_needs_to_unlock() macro needs to return 1, it needs
+ * to return 0 otherwise.
+ */
+#define __mutex_fastpath_unlock(count, fail_fn) \
+do { \
+ smp_mb(); \
+ if (unlikely(atomic_inc_return(count) <= 0)) \
+ fail_fn(count); \
+} while (0)
+
+#define __mutex_slowpath_needs_to_unlock() 1
+
+/**
+ * __mutex_fastpath_trylock - try to acquire the mutex, without waiting
+ *
+ * @count: pointer of type atomic_t
+ * @fail_fn: fallback function
+ *
+ * Change the count from 1 to a value lower than 1, and return 0 (failure)
+ * if it wasn't 1 originally, or return 1 (success) otherwise. This function
+ * MUST leave the value lower than 1 even when the "1" assertion wasn't true.
+ * Additionally, if the value was < 0 originally, this function must not leave
+ * it to 0 on failure.
+ *
+ * If the architecture has no effective trylock variant, it should call the
+ * <fail_fn> spinlock-based trylock variant unconditionally.
+ */
+static inline int
+__mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
+{
+ /*
+ &n