Chapter 8-2
Introduction
We will now develop the exclusion mechanisms mentioned previously.
Besides the construction itself, we need to abstract the situations in which data sharing can go wrong. A natural, naive picture: a thread wants to modify something, but due to a thread switch the data has already been modified by another thread, and we get a wrong result. Based on this, we want an operation to be atomic, meaning that while it runs it excludes all others. We can then relax this restriction and generalize it.
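To make "atomic" concrete, here is a small user-space Rust sketch (using the standard library, not the kernel being built in this chapter): several threads increment a shared counter, and an atomic read-modify-write keeps the result correct because no thread switch can land between the read and the write.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// `fetch_add` performs the read-modify-write as ONE atomic operation,
// so no interleaving of threads can lose an update.
static COUNTER: AtomicUsize = AtomicUsize::new(0);

pub fn run_counter_demo() -> usize {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                for _ in 0..1000 {
                    COUNTER.fetch_add(1, Ordering::SeqCst);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    COUNTER.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(run_counter_demo(), 4000);
}
```

Had we used a plain non-atomic counter and `counter = counter + 1`, a thread switch between the read and the write could lose increments, which is exactly the wrong-result scenario described above.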
Generalization:
- Allow a finite number of threads to join one atomic operation.
- Allow an atomic operation to be guarded by a condition.
Before making such generalizations, we need a way to talk about atomic operations. We call the content of such an operation a Critical Section, and the situation where multiple threads operate in an indeterminate time order a Race Condition. The basic data-sharing problem thus pushes us to separate the operations of different threads. We cannot restrict the data itself, because the problem lies in how threads modify it; we need to lock operations!
So, if there is a lock shared by threads, each thread can declare "Lock it!", and no other thread can enter the locked section until it is released.
Now, back to our generalization. If the lock has a bound on the number of holders, many threads can enter until the bound is reached; that is also a reasonable design, and we call it a Semaphore. If the lock has a signal that one thread can send to allow others to proceed, that too is a reasonable design, and we call it a Condition Variable.
If the real minimal shared thing is the lock rather than the data, we can discard the so-called data problem and focus on the lock itself: each thread can do anything while holding the lock, excluding all others.
Design
No matter which kind of lock we build, it is shared among threads.
```rust
pub struct ProcessControlBlock {
    // ... (rest of the snippet elided in the source)
}
```
In such a design, a lock can push a thread onto its `wait_queue` to stop it, and pop from the front to resume it. `data` is a generalization of the state carried by the various kinds of lock.
Then, within one process, it owns many locks used under various conditions; one can regard them as a generalization of the many pieces of data (although nothing here is tied to real data) that we want to share.
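As a hedged sketch of this design (field and type names here are assumptions, modeled on rCore's layout, with `std` types standing in for kernel ones): each lock pairs its state with a wait queue, and the process keeps id-indexed lists of locks.

```rust
use std::collections::VecDeque;
use std::sync::Arc;

// Stand-in for the kernel's task handle (an assumption for this sketch).
pub struct TaskControlBlock;

// A lock shared among threads: `wait_queue` holds blocked tasks, and
// `data` is the lock-specific state (a bool for a mutex, a count for a
// semaphore, and so on).
pub struct LockInner<T> {
    pub wait_queue: VecDeque<Arc<TaskControlBlock>>,
    pub data: T,
}

// Each process owns id-indexed lists of such locks; an index into a
// list serves as the lock's id.
pub struct ProcessControlBlockInner {
    pub mutex_list: Vec<Option<Arc<LockInner<bool>>>>,
    pub semaphore_list: Vec<Option<Arc<LockInner<i32>>>>,
}

fn main() {
    let p = ProcessControlBlockInner {
        mutex_list: Vec::new(),
        semaphore_list: Vec::new(),
    };
    assert!(p.mutex_list.is_empty() && p.semaphore_list.is_empty());
}
```

The generic `data` field is what lets one wait-queue mechanism serve every kind of lock in the chapter.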
Basic Lock
Now, we want to construct a basic lock allowing simple `lock` and `unlock` operations.
```rust
pub trait Mutex: Sync + Send {
    fn lock(&self);
    fn unlock(&self);
}
```
Usually there are U-level (user), M-level (machine), and S-level (supervisor) implementations. First we will try the easy U-level one, then learn the heuristic design at M-level, and extend the basic idea to S-level.
U-level
A naive approach is to declare a global boolean indicating the locked state. `lock` waits while the boolean is true and then tries to set it to true, and `unlock` sets it to false to release.
```rust
static mut mutex: i32 = 0;
```
However, that is wrong! We cannot construct a lock out of the very thing we want to lock: threads can be switched between any two instructions and break it. Does that mean we cannot do it at U-level? Let us ponder the real situation further. Imagine two threads modifying one thing at nearly the same time. If we could set two global states in operations that exclude each other (for example, one thread sets a state to 1 while the other sets it to 0), then only one of the writes can take effect last, and we can check that condition to decide which thread gets the lock.
```rust
static mut flag: [i32; 2] = [0, 0]; // which thread wants the lock?
static mut turn: i32 = 0;           // whose turn is it?
```
Now analyze the code: no matter which flag is 1, or whether both are, indicating that some thread wants the lock, `turn` acts as a tie-breaking state over `flag`. Even if both threads write `turn` at almost the same time, it can only end up in one of the two states, so only one thread can get the lock.
M-level
Is there any predefined atomic operation among the hardware instructions that we can use as a lock? The answer is yes; in RISC-V there are:
- AMO: Atomic memory operation
- LR/SC: Load Reserved/Store Conditional
AMO reads the value in memory, writes a new value, and stores the old value into the target register (e.g. `amoadd.w rd, rs2, (rs1)`).
LR/SC: LR reads memory into the target register and leaves a reservation on the address; SC then checks the reservation and, if it still holds, writes its data to that address and outputs a success flag (0/1) to the target register (e.g. `lr.w rd, (rs1)`, `sc.w rd, rs2, (rs1)`).
We can use these to implement an atomic test-and-set (TAS) function:
```asm
# RISC-V sequence for implementing a TAS at (s1)
        li      t2, 1            # value meaning "locked"
Try:    lr.w    t1, (s1)         # t1 := mem[s1], reserve the address
        bne     t1, x0, Try      # already locked? retry
        sc.w    t0, t2, (s1)     # try mem[s1] := t2; t0 = 0 on success
        bne     t0, x0, Try      # store failed? retry
        # lock acquired
```
Here the logic of `Try` is: `mem[s1]` is zero if unlocked and non-zero if locked. `Try` compares `t1` with `x0` (that is, `mem[s1]` with 0); if it is zero, it tries to store `t2` into `(s1)`. It then compares `t0` with `x0` (that is, SC's success flag with 0); if the flag is zero the store succeeded and we fall through, otherwise we repeat. Throughout this process, whenever the write fails, `t0` is non-zero and we loop back to `Try`.
If we want to `Unlock`, we write `x0` to `(s1)`, setting `mem[s1]` to zero, which is the unlocked state.
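In Rust we rarely write the LR/SC loop by hand: `compare_exchange` on an atomic expresses the same test-and-set, and on RISC-V it compiles down to an `lr.w`/`sc.w` retry sequence. A minimal user-space spin lock sketch:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

pub struct SpinLock(AtomicU32);

impl SpinLock {
    pub const fn new() -> Self {
        SpinLock(AtomicU32::new(0)) // 0 = unlocked, 1 = locked
    }
    pub fn lock(&self) {
        // Try to flip 0 -> 1 atomically; retry while another thread
        // holds the lock. This is the `Try` loop from the assembly.
        while self
            .0
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }
    pub fn unlock(&self) {
        self.0.store(0, Ordering::Release); // write zero: unlocked state
    }
}

pub fn run_spinlock_demo() -> u32 {
    let lock = Arc::new(SpinLock::new());
    let counter = Arc::new(AtomicU32::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let (l, c) = (Arc::clone(&lock), Arc::clone(&counter));
            thread::spawn(move || {
                for _ in 0..1000 {
                    l.lock();
                    // A split read-then-write, safe only because the
                    // lock makes the whole section exclusive.
                    let v = c.load(Ordering::Relaxed);
                    c.store(v + 1, Ordering::Relaxed);
                    l.unlock();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(run_spinlock_demo(), 4000);
}
```

The `Acquire`/`Release` orderings give the same happens-before guarantee the hardware TAS provides: everything done inside the critical section is visible to the next thread that acquires the lock.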
S-level
Then we can port the function into Rust and package it. A simple refinement: while spinning in the retry loop, we `yield`, giving the CPU to other threads.
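A user-space analogue of that refinement (a sketch assuming `std`; the kernel version would suspend the current task and run the next one instead of calling `yield_now`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

pub struct YieldMutex(AtomicBool);

impl YieldMutex {
    pub const fn new() -> Self {
        YieldMutex(AtomicBool::new(false))
    }
    pub fn lock(&self) {
        // `swap` returns the previous value: `true` means someone else
        // already held the lock, so give the CPU away and retry.
        while self.0.swap(true, Ordering::Acquire) {
            thread::yield_now();
        }
    }
    pub fn unlock(&self) {
        self.0.store(false, Ordering::Release);
    }
}

pub fn run_yield_demo() -> i32 {
    let m = Arc::new(YieldMutex::new());
    let m2 = Arc::clone(&m);
    m.lock();
    let t = thread::spawn(move || {
        m2.lock(); // loops (yielding each time) until main unlocks
        m2.unlock();
        42
    });
    m.unlock();
    t.join().unwrap()
}

fn main() {
    assert_eq!(run_yield_demo(), 42);
}
```

Yielding instead of busy-spinning matters most on a single core, where a spinning thread would otherwise burn its whole time slice waiting for a holder that cannot run.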
Now, we can apply any kind of lock to our structure. First, when we create a lock, we build it and put it into the list, either into an empty slot or appended at the end.
```rust
// os/src/syscall/sync.rs
// ... (snippet elided in the source)
```
When we call `lock`, we provide the corresponding id of the lock. If it is already locked, we push the current thread onto `wait_queue`; otherwise we lock it and go on.
```rust
// os/src/syscall/sync.rs
// ... (snippet elided in the source)
```
The unlock operation mirrors it:
```rust
// os/src/syscall/sync.rs
// ... (snippet elided in the source)
```
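As a hedged user-space model of this create/lock-by-id flow (names like `mutex_create` are assumptions, and `std`'s mutex stands in for the kernel lock object): the process keeps an id-indexed list, reusing empty slots on creation and looking locks up by id.

```rust
use std::sync::{Arc, Mutex};

struct Process {
    mutex_list: Vec<Option<Arc<Mutex<()>>>>,
}

impl Process {
    fn new() -> Self {
        Process { mutex_list: Vec::new() }
    }
    // Reuse an empty slot if there is one, otherwise append; the
    // returned index plays the role of the lock id in the syscall.
    fn mutex_create(&mut self) -> usize {
        if let Some(id) = self.mutex_list.iter().position(|m| m.is_none()) {
            self.mutex_list[id] = Some(Arc::new(Mutex::new(())));
            id
        } else {
            self.mutex_list.push(Some(Arc::new(Mutex::new(()))));
            self.mutex_list.len() - 1
        }
    }
    // Lock and unlock syscalls would start by looking the lock up by id.
    fn mutex_get(&self, id: usize) -> Arc<Mutex<()>> {
        self.mutex_list[id].as_ref().unwrap().clone()
    }
}

pub fn run_create_demo() -> (usize, usize, usize) {
    let mut p = Process::new();
    let a = p.mutex_create();
    let b = p.mutex_create();
    let _ = p.mutex_get(b);
    p.mutex_list[a] = None;   // pretend lock `a` was destroyed
    let c = p.mutex_create(); // reuses the now-empty slot
    (a, b, c)
}

fn main() {
    assert_eq!(run_create_demo(), (0, 1, 0));
}
```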
Semaphore
The semaphore is simple: we only need to replace the boolean with a number and check a bound. The initial count is the bound; when a thread enters it decrements the count, and when it releases it increments the count. We only need to check whether the count is positive or negative.
Apply our structure:
```rust
pub fn up(&self) {
    // ... (body elided in the source)
}
```
If the initial count equals 1, we are back to a `mutex`, which indicates single-thread access!
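A hedged user-space sketch of such a semaphore (using `std`'s mutex and condition variable in place of the kernel's wait queue; in this common formulation the count never goes negative, because waiters block before decrementing):

```rust
use std::sync::{Condvar, Mutex};

pub struct Semaphore {
    count: Mutex<i32>,
    cv: Condvar,
}

impl Semaphore {
    pub fn new(n: i32) -> Self {
        Semaphore { count: Mutex::new(n), cv: Condvar::new() }
    }
    // `down` (P): wait until the count is positive, then take one.
    pub fn down(&self) {
        let mut c = self.count.lock().unwrap();
        while *c <= 0 {
            c = self.cv.wait(c).unwrap();
        }
        *c -= 1;
    }
    // `up` (V): return one and wake a waiter if any.
    pub fn up(&self) {
        let mut c = self.count.lock().unwrap();
        *c += 1;
        self.cv.notify_one();
    }
}

pub fn run_sem_demo() -> i32 {
    use std::sync::Arc;
    use std::thread;
    let sem = Arc::new(Semaphore::new(1)); // count 1: behaves as a mutex
    sem.down();                            // take the single permit
    let sem2 = Arc::clone(&sem);
    let t = thread::spawn(move || {
        sem2.down(); // blocks until the permit is returned
        7
    });
    sem.up();        // release: the other thread can now proceed
    t.join().unwrap()
}

fn main() {
    assert_eq!(run_sem_demo(), 7);
}
```

With an initial count of N, up to N threads can hold the semaphore at once, which is exactly the bounded generalization of the mutex described earlier.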
Actually, we can use it for synchronization: set the count to 0. If one thread does `down` first, it blocks; another thread can then `up`, adding one to the count, and the blocked thread finally proceeds. The thread that calls `up` thus always runs its operation before the blocked thread continues. In the example below, the first thread's operation is always ordered before the second's:
```rust
const SEM_SYNC: usize = 0; // semaphore id
```
Condition Variable
If we want one thread to have the ability to release the lock for others, we need the `CondVar`. We dispatch operations through a `wait_queue`: when one thread `signal`s, it pops a thread off the queue, triggering it with "You are free!". When a thread `wait`s, it pushes itself onto the queue. The unlock-then-relock inside `wait` is important: unlocking allows other threads to modify the condition, but the unlock must come after the push onto the queue; if the signal could arrive before the push, we might never receive it. We do not encapsulate the condition check inside `CondVar`, because that should be left for the user to design; we only expose the interface.
```rust
pub fn signal(&self) {
    // ... (body elided in the source)
}
```
However, since the condition check is left to the user, we cannot ensure the condition is not violated by concurrent data sharing, so we usually need to wrap this section with a `mutex` lock.
```rust
static mut A: usize = 0; // global variable
```
We can see that once `A == 1`, the second thread stops `wait`ing repeatedly and proceeds.
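The same pattern translates directly to `std`'s condition variable (a user-space analogue, not the kernel code above): the waiter re-checks the condition in a loop, and each `wait` atomically unlocks, sleeps, and relocks.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

pub fn run_condvar_demo() -> usize {
    // The mutex protects the shared condition `a`, as discussed above.
    let pair = Arc::new((Mutex::new(0usize), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    let second = thread::spawn(move || {
        let (lock, cv) = &*pair2;
        let mut a = lock.lock().unwrap();
        // Re-check in a loop: `wait` unlocks the mutex, sleeps until a
        // signal, then relocks before returning.
        while *a != 1 {
            a = cv.wait(a).unwrap();
        }
        *a
    });

    let (lock, cv) = &*pair;
    *lock.lock().unwrap() = 1; // the first thread sets the condition...
    cv.notify_one();           // ...then signals the waiter
    second.join().unwrap()
}

fn main() {
    assert_eq!(run_condvar_demo(), 1);
}
```

Because the condition is set under the same mutex the waiter holds across its check, the signal can never slip in between "check the condition" and "go to sleep", which is exactly the ordering hazard described above.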