In our previous blog we covered how Linux provides atomic operations to guarantee atomicity at the level of an integer or individual bits. But what if we need atomicity for complex data that spans beyond a simple integer or a few bits and bytes?
To achieve this higher-level atomicity, Linux provides locks, which incur a performance penalty compared to atomic operations.
The spinlock is a heavily used lock in Linux. A spinlock lets the executing thread busy-loop instead of going into a WAIT state (relinquishing its quantum and causing a context switch). A thread contending for a spinlock is not preempted on its current core until the lock is released by the thread holding it.
Because threads contending for a spinlock waste processor cycles, it is advisable to hold a spinlock only for short durations.
There are other locks, such as mutexes and semaphores, that put the contending thread to sleep, yielding the processor and causing a context switch.
Common spinlock functions are:
spin_lock(spinlock_t *lock) : acquire the lock, busy-waiting if it is held
spin_lock_irqsave(spinlock_t *lock, unsigned long flags) : save the current interrupt state in flags, disable interrupts on the local processor only, and acquire the lock
spin_lock_irq(spinlock_t *lock) : disable interrupts on the local processor (without saving their previous state) and acquire the lock; avoid this variant unless you are certain interrupts are currently enabled, since the matching unlock re-enables them unconditionally
spin_lock_bh(spinlock_t *lock) : disable softirqs (bottom halves) and acquire the lock
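In kernel code these calls typically bracket a short critical section. A hedged sketch of the common spin_lock_irqsave pattern is below; the my_device structure and its counter field are hypothetical, and this compiles only against kernel headers:

```c
#include <linux/spinlock.h>

/* Hypothetical device structure; its fields are illustrative only. */
struct my_device {
    spinlock_t lock;
    int counter;
};

/* Safe against interrupt context on the same CPU: interrupts on the
 * local processor are disabled while the lock is held, and their
 * previous state is restored by spin_unlock_irqrestore(). */
static void my_device_tick(struct my_device *dev)
{
    unsigned long flags;

    spin_lock_irqsave(&dev->lock, flags);
    dev->counter++;                 /* keep the critical section short */
    spin_unlock_irqrestore(&dev->lock, flags);
}
```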
Below is a userspace implementation of spinlock_lock and spinlock_unlock using the GCC __sync builtins:
#include <stdio.h>

typedef struct spinlock {
    volatile int lock;   /* 0 = unlocked, 1 = locked */
} spinlock_t;

void spinlock_init(spinlock_t *lock) {
    lock->lock = 0;
}

void spinlock_lock(spinlock_t *lock) {
    /* Atomically set the flag to 1; keep spinning while the
     * old value was already 1 (i.e. someone else holds the lock). */
    while (__sync_lock_test_and_set(&lock->lock, 1)) {
        ;   /* busy-wait */
    }
}

void spinlock_unlock(spinlock_t *lock) {
    /* Reset the flag to 0 with release semantics. */
    __sync_lock_release(&lock->lock);
}

int main(void) {
    spinlock_t lock;

    spinlock_init(&lock);
    spinlock_lock(&lock);
    spinlock_unlock(&lock);
    return 0;
}