
Chapter 7-1

Introduction

We will abstract Stdin and Stdout as files and insert them into each process's file descriptor table, which lets us support pipe operations and I/O redirection across processes.

Everything Is a File

The design philosophy of Everything is a file generalizes everything to file-based I/O operations while omitting the concrete semantics of the content.

Abstraction of IO hardware:

  • read-only: e.g. keyboard
  • write-only: e.g. screen
  • read-write: e.g. serial device

Abstraction of I/O operations (based on file descriptors):

  • open: open a file so that a certain process possesses it.
  • close: close a file so that the process releases it.
  • read: read a file into memory.
  • write: write a file from memory.

When a process is created, it owns three files as operation abstractions:

  • 0: Stdin
  • 1: Stdout
  • 2: Stderr(which we will merge with Stdout)
impl TaskControlBlock {
    pub fn new(elf_data: &[u8]) -> Self {
        ...
        let task_control_block = Self {
            pid: pid_handle,
            kernel_stack,
            inner: Mutex::new(TaskControlBlockInner {
                // ...
                fd_table: vec![
                    // 0 -> stdin
                    Some(Arc::new(Stdin)),
                    // 1 -> stdout
                    Some(Arc::new(Stdout)),
                    // 2 -> stderr
                    Some(Arc::new(Stdout)),
                ],
            }),
        };
        ...
    }
}

Pipe

In a usual shell, | is the symbol for a pipe: it takes input from the left side and feeds output to the right. If we abstract everything as a file, like Stdin or Stdout, then a pipe is a file too. It has a read end and a write end: one process (often a child) writes data into one end and another reads it from the other end, transferring the bytes underneath.

We already have file descriptors as handles to files, so we implement the same operations for pipes.

sys_pipe receives the pointer to an array of length 2 and writes the read-end and write-end descriptors of the pipe into it.

// user/src/syscall.rs

const SYSCALL_PIPE: usize = 59;

pub fn sys_pipe(pipe: &mut [usize]) -> isize {
    syscall(SYSCALL_PIPE, [pipe.as_mut_ptr() as usize, 0, 0])
}

So what is the basic design of a pipe?

It should have a write end and a read end that share the same data, and it must record the read and write positions within that data. We construct a RingBuffer to achieve this: the pipe owns a buffer that controls reads and writes, recording data between a head index and a tail index. Why can't we just use two pieces of data, or a Queue?

Because the ring buffer needs no copying and fits our constraints: we read data at the head and move it forward, and push data at the tail of a fixed array, instead of allocating as a Queue would.

// os/src/fs/pipe.rs

pub struct Pipe {
    readable: bool,
    writable: bool,
    buffer: Arc<Mutex<PipeRingBuffer>>,
}

const RING_BUFFER_SIZE: usize = 32;

#[derive(Copy, Clone, PartialEq)]
enum RingBufferStatus {
    FULL,
    EMPTY,
    NORMAL,
}

pub struct PipeRingBuffer {
    arr: [u8; RING_BUFFER_SIZE],
    head: usize, // head index of ring buffer
    tail: usize, // tail index of ring buffer
    status: RingBufferStatus,
    write_end: Option<Weak<Pipe>>,
}

impl PipeRingBuffer {
    pub fn set_write_end(&mut self, write_end: &Arc<Pipe>) {
        self.write_end = Some(Arc::downgrade(write_end));
    }
}

/// Return (read_end, write_end)
pub fn make_pipe() -> (Arc<Pipe>, Arc<Pipe>) {
    let buffer = Arc::new(Mutex::new(PipeRingBuffer::new()));
    let read_end = Arc::new(
        Pipe::read_end_with_buffer(buffer.clone())
    );
    let write_end = Arc::new(
        Pipe::write_end_with_buffer(buffer.clone())
    );
    buffer.lock().set_write_end(&write_end);
    (read_end, write_end)
}

impl PipeRingBuffer {
    pub fn read_byte(&mut self) -> u8 {
        self.status = RingBufferStatus::NORMAL;
        let c = self.arr[self.head];
        // move forward
        self.head = (self.head + 1) % RING_BUFFER_SIZE;
        if self.head == self.tail {
            self.status = RingBufferStatus::EMPTY;
        }
        c
    }
    pub fn available_read(&self) -> usize {
        if self.status == RingBufferStatus::EMPTY {
            0
        } else {
            // data from head to tail!
            if self.tail > self.head {
                self.tail - self.head
            } else {
                self.tail + RING_BUFFER_SIZE - self.head
            }
        }
    }
    pub fn all_write_ends_closed(&self) -> bool {
        self.write_end.as_ref().unwrap().upgrade().is_none()
    }
}

It is possible that a process can't read everything in one go; if so, we suspend it and run something else until the write end supplies more data or is closed.

// os/src/fs/pipe.rs

impl File for Pipe {
    fn read(&self, buf: UserBuffer) -> usize {
        assert!(self.readable());
        let want_to_read = buf.len();
        let mut buf_iter = buf.into_iter();
        let mut already_read = 0usize;
        loop {
            // lock() matches the Arc<Mutex<PipeRingBuffer>> declared above
            let mut ring_buffer = self.buffer.lock();
            let loop_read = ring_buffer.available_read();
            if loop_read == 0 {
                if ring_buffer.all_write_ends_closed() {
                    return already_read;
                }
                drop(ring_buffer);
                suspend_current_and_run_next();
                continue;
            }
            for _ in 0..loop_read {
                if let Some(byte_ref) = buf_iter.next() {
                    unsafe {
                        *byte_ref = ring_buffer.read_byte();
                    }
                    already_read += 1;
                    if already_read == want_to_read {
                        return want_to_read;
                    }
                } else {
                    return already_read;
                }
            }
        }
    }
}

Arguments

We will now combine our pipe with our shell.

First, parse the arguments and push a 0 at the end to mark termination.

// user/src/bin/user_shell.rs

let args: Vec<_> = line.as_str().split(' ').collect();
// keep the NUL-terminated copies alive so their pointers stay valid
let args_copy: Vec<String> = args
    .iter()
    .map(|&arg| {
        let mut s = arg.to_string();
        s.push('\0');
        s
    })
    .collect();
let mut args_addr: Vec<*const u8> = args_copy
    .iter()
    .map(|arg| arg.as_ptr())
    .collect();
args_addr.push(core::ptr::null());

Now a task accepts a series of arguments rather than a single string, so make sys_exec:

// os/src/syscall/process.rs

pub fn sys_exec(path: *const u8, mut args: *const usize) -> isize {
    let token = current_user_token();
    let path = translated_str(token, path);
    let mut args_vec: Vec<String> = Vec::new();
    // args is a ptr to an array containing ptrs to strings
    loop {
        let arg_str_ptr = *translated_ref(token, args);
        if arg_str_ptr == 0 {
            break;
        }
        args_vec.push(translated_str(token, arg_str_ptr as *const u8));
        unsafe { args = args.add(1); }
    }
    if let Some(app_inode) = open_file(path.as_str(), OpenFlags::RDONLY) {
        let all_data = app_inode.read_all();
        let task = current_task().unwrap();
        let argc = args_vec.len();
        task.exec(all_data.as_slice(), args_vec);
        // return argc because cx.x[10] will be covered with it later
        argc as isize
    } else {
        -1
    }
}

Now we use the user stack to store these arguments:

// os/src/task/task.rs

impl TaskControlBlock {
    // notice exec will allocate a new memory set!
    pub fn exec(&self, elf_data: &[u8], args: Vec<String>) {
        // ...
        // first allocate memory for the ptrs of the strings
        user_sp -= (args.len() + 1) * core::mem::size_of::<usize>();
        let argv_base = user_sp;
        // collect refs into the new user stack as a vector of argv slots
        let mut argv: Vec<_> = (0..=args.len())
            .map(|arg| {
                translated_refmut(
                    memory_set.token(),
                    (argv_base + arg * core::mem::size_of::<usize>()) as *mut usize
                )
            })
            .collect();
        *argv[args.len()] = 0;
        for i in 0..args.len() {
            // allocate for the strings themselves
            user_sp -= args[i].len() + 1;
            *argv[i] = user_sp;
            let mut p = user_sp;
            for c in args[i].as_bytes() {
                *translated_refmut(memory_set.token(), p as *mut u8) = *c;
                p += 1;
            }
            *translated_refmut(memory_set.token(), p as *mut u8) = 0;
        }
        // make the user_sp aligned to 8B for k210 platform
        user_sp -= user_sp % core::mem::size_of::<usize>();

        // **** hold current PCB lock
        let mut inner = self.acquire_inner_lock();
        // substitute memory_set
        inner.memory_set = memory_set;
        // update trap_cx ppn
        inner.trap_cx_ppn = trap_cx_ppn;
        // initialize trap_cx
        let mut trap_cx = TrapContext::app_init_context(
            entry_point,
            user_sp,
            KERNEL_SPACE.lock().token(),
            self.kernel_stack.get_top(),
            trap_handler as usize,
        );
        // a[0] be args len
        trap_cx.x[10] = args.len();
        // a[1] be args base addr
        trap_cx.x[11] = argv_base;
        *inner.get_trap_cx() = trap_cx;
        // **** release current PCB lock
    }
}

Now we receive the arguments in _start so that main can use them from the beginning: the kernel laid the data out on the user stack in S-level, and _start reassembles it in U-level:

// user/src/lib.rs

#[no_mangle]
#[link_section = ".text.entry"]
pub extern "C" fn _start(argc: usize, argv: usize) -> ! {
    unsafe {
        HEAP.lock()
            .init(HEAP_SPACE.as_ptr() as usize, USER_HEAP_SIZE);
    }
    let mut v: Vec<&'static str> = Vec::new();
    for i in 0..argc {
        let str_start = unsafe {
            ((argv + i * core::mem::size_of::<usize>()) as *const usize).read_volatile()
        };
        let len = (0usize..).find(|i| unsafe {
            ((str_start + *i) as *const u8).read_volatile() == 0
        }).unwrap();
        v.push(
            core::str::from_utf8(unsafe {
                core::slice::from_raw_parts(str_start as *const u8, len)
            }).unwrap()
        );
    }
    exit(main(argc, v.as_slice()));
}

Redirection

Redirection is usually expressed with > and < for output and input.

To redirect I/O, we combine user_shell with sys_dup.

First, sys_dup duplicates a file descriptor already opened by this process into the lowest free slot of its fd table.

Then we parse the user's arguments. If > or < is present, we fork a new child process, open the file, close the corresponding Stdin or Stdout descriptor, and use dup to let the file take its place. Then we exec with the originally parsed arguments and collect the result in the parent process.

Chapter 7-2

Introduction

If one process wants to notify another process with event semantics, such a one-sided mechanism is called a Signal: a process that receives a specific event pauses and runs a corresponding operation to handle the notification.

For example, a program could receive the stop event sent by Ctrl+C and stop itself.

The abstraction for handling a signal:

  • ignore: carry on with its own work and ignore the signal
  • trap: call the operation corresponding to the received signal
  • stop: stop itself

Now, beside this raw idea, we want to back the abstraction with concrete data.


Signal Data

First, we define raw info for each possible event.

// user/src/lib.rs

pub const SIGDEF: i32 = 0; // Default signal handling
pub const SIGHUP: i32 = 1;
pub const SIGINT: i32 = 2;
pub const SIGQUIT: i32 = 3;
pub const SIGILL: i32 = 4;
pub const SIGTRAP: i32 = 5;
pub const SIGABRT: i32 = 6;
pub const SIGBUS: i32 = 7;
pub const SIGFPE: i32 = 8;
pub const SIGKILL: i32 = 9;
...

So what if a process wants to ignore a signal? What should it do? We introduce a Mask in bit form: each signal occupies one bit of an integer, so a whole set of signals fits into a single mask value.

// user/src/lib.rs

bitflags! {
    pub struct SignalFlags: i32 {
        const SIGDEF = 1; // Default signal handling
        const SIGHUP = 1 << 1;
        const SIGINT = 1 << 2;
        const SIGQUIT = 1 << 3;
        const SIGILL = 1 << 4;
        const SIGTRAP = 1 << 5;
        ...
        const SIGSYS = 1 << 31;
    }
}

In a task control block, we record the current mask, the signal currently being handled, and an action for each signal flag, so we need a fixed array of handler pointers together with their masks. Besides that, we record the flags that are currently pending.

// user/src/lib.rs

/// Action for a signal
#[repr(C, align(16))]
#[derive(Debug, Clone, Copy)]
pub struct SignalAction {
    pub handler: usize,
    pub mask: SignalFlags,
}

// os/src/task/signal.rs

pub const MAX_SIG: usize = 31;

// os/src/task/action.rs

#[derive(Clone)]
pub struct SignalActions {
    pub table: [SignalAction; MAX_SIG + 1],
}

// os/src/task/task.rs

pub struct TaskControlBlockInner {
    ...
    pub handling_sig: isize,
    // signals pending for this task
    pub signals: SignalFlags,
    // signals this task has masked out
    pub signal_mask: SignalFlags,
    pub signal_actions: SignalActions,
    ...
}

Then our task knows which signals should be handled and which should be ignored.


Signal Handle

Recall that each process receives signals and handles them at the appropriate level: some in S-level, some in U-level. Some signals may be so illegal or severe that we should kill the task or freeze it to wait. Before running a user handler we must back up trap_ctx, because the handler runs in a different environment.

// os/src/task/task.rs

pub struct TaskControlBlockInner {
    ...
    pub killed: bool,
    pub frozen: bool,
    pub handling_sig: isize,
    pub trap_ctx_backup: Option<TrapContext>,
    ...
}

// os/src/task/mod.rs

// Some signals are severe and handled by the kernel.
fn call_kernel_signal_handler(signal: SignalFlags) {
    let task = current_task().unwrap();
    let mut task_inner = task.inner_exclusive_access();
    match signal {
        SignalFlags::SIGSTOP => {
            task_inner.frozen = true;
            task_inner.signals ^= SignalFlags::SIGSTOP;
        }
        SignalFlags::SIGCONT => {
            if task_inner.signals.contains(SignalFlags::SIGCONT) {
                task_inner.signals ^= SignalFlags::SIGCONT;
                task_inner.frozen = false;
            }
        }
        _ => {
            // println!(
            //     "[K] call_kernel_signal_handler:: current task sigflag {:?}",
            //     task_inner.signals
            // );
            task_inner.killed = true;
        }
    }
}

// Other signals are normal and handled by the user.
fn call_user_signal_handler(sig: usize, signal: SignalFlags) {
    let task = current_task().unwrap();
    let mut task_inner = task.inner_exclusive_access();

    let handler = task_inner.signal_actions.table[sig].handler;
    if handler != 0 {
        // register signal into task
        task_inner.handling_sig = sig as isize;
        task_inner.signals ^= signal;

        // backup
        let trap_ctx = task_inner.get_trap_cx();
        task_inner.trap_ctx_backup = Some(*trap_ctx);

        // modify current trap for our event handler
        trap_ctx.sepc = handler;
        trap_ctx.x[10] = sig;
    } else {
        // default action
        println!("[K] task/call_user_signal_handler: default action: ignore it or kill process");
    }
}

Based on this, we can check the pending signals against the task's signal_mask and against the mask of the signal currently being handled (taken from the actions table).

// os/src/task/mod.rs

fn check_pending_signals() {
    for sig in 0..(MAX_SIG + 1) {
        let task = current_task().unwrap();
        let task_inner = task.inner_exclusive_access();
        let signal = SignalFlags::from_bits(1 << sig).unwrap();
        if task_inner.signals.contains(signal) && (!task_inner.signal_mask.contains(signal)) {
            let mut masked = true;
            let handling_sig = task_inner.handling_sig;
            if handling_sig == -1 {
                masked = false;
            } else {
                let handling_sig = handling_sig as usize;
                if !task_inner.signal_actions.table[handling_sig]
                    .mask
                    .contains(signal)
                {
                    masked = false;
                }
            }
            if !masked {
                drop(task_inner);
                drop(task);
                if signal == SignalFlags::SIGKILL
                    || signal == SignalFlags::SIGSTOP
                    || signal == SignalFlags::SIGCONT
                    || signal == SignalFlags::SIGDEF
                {
                    // signal is a kernel signal
                    call_kernel_signal_handler(signal);
                } else {
                    // signal is a user signal
                    call_user_signal_handler(sig, signal);
                    return;
                }
            }
        }
    }
}

Then write a loop that handles signals repeatedly while tracking the frozen/killed state of the task.

// os/src/task/mod.rs

pub fn handle_signals() {
    loop {
        check_pending_signals();
        let (frozen, killed) = {
            let task = current_task().unwrap();
            let task_inner = task.inner_exclusive_access();
            (task_inner.frozen, task_inner.killed)
        };
        if !frozen || killed {
            break;
        }
        suspend_current_and_run_next();
    }
}

System Operation

Finally, we will design sys operations to construct interface.

  • procmask: set the mask of the current process
  • sigaction: set the handler of a signal for the current process, moving the original handler into the given old_action pointer
  • kill: the current process sends a signal to another process
  • sigreturn: clear the current signal and return to the original trap state

We will construct it one by one.

procmask is simple: we just set the mask directly and return the original one.

// os/src/syscall/process.rs

pub fn sys_sigprocmask(mask: u32) -> isize {
    if let Some(task) = current_task() {
        let mut inner = task.inner_exclusive_access();
        let old_mask = inner.signal_mask;
        if let Some(flag) = SignalFlags::from_bits(mask) {
            inner.signal_mask = flag;
            old_mask.bits() as isize
        } else {
            -1
        }
    } else {
        -1
    }
}

sigaction is a bit harder but still easy; note, however, that the pointers may be null.

// os/src/syscall/process.rs

fn check_sigaction_error(signal: SignalFlags, action: usize, old_action: usize) -> bool {
    if action == 0
        || old_action == 0
        || signal == SignalFlags::SIGKILL
        || signal == SignalFlags::SIGSTOP
    {
        true
    } else {
        false
    }
}

pub fn sys_sigaction(
    signum: i32,
    action: *const SignalAction,
    old_action: *mut SignalAction,
) -> isize {
    let token = current_user_token();
    let task = current_task().unwrap();
    let mut inner = task.inner_exclusive_access();
    if signum as usize > MAX_SIG {
        return -1;
    }
    if let Some(flag) = SignalFlags::from_bits(1 << signum) {
        if check_sigaction_error(flag, action as usize, old_action as usize) {
            return -1;
        }
        let prev_action = inner.signal_actions.table[signum as usize];
        *translated_refmut(token, old_action) = prev_action;
        inner.signal_actions.table[signum as usize] = *translated_ref(token, action);
        0
    } else {
        -1
    }
}

kill is simple: we look up the task by pid and insert the flag into its pending set, provided the same flag has not been set already.

// os/src/syscall/process.rs

pub fn sys_kill(pid: usize, signum: i32) -> isize {
    if let Some(task) = pid2task(pid) {
        if let Some(flag) = SignalFlags::from_bits(1 << signum) {
            // insert the signal only if the same one is not already pending
            let mut task_ref = task.inner_exclusive_access();
            if task_ref.signals.contains(flag) {
                return -1;
            }
            task_ref.signals.insert(flag);
            0
        } else {
            -1
        }
    } else {
        -1
    }
}

sigreturn is simple, because we only need to restore the backup.

// os/src/syscall/process.rs

pub fn sys_sigreturn() -> isize {
    if let Some(task) = current_task() {
        let mut inner = task.inner_exclusive_access();
        inner.handling_sig = -1;
        // restore the trap context
        let trap_ctx = inner.get_trap_cx();
        *trap_ctx = inner.trap_ctx_backup.unwrap();
        0
    } else {
        -1
    }
}

Phew! We finish our Signal mechanism!

Chapter 8-1

Introduction

As the OS grows, dispatched resources can be divided into smaller pieces for more efficient operation. A process alone no longer satisfies our demands: we want parts of a program to run in parallel. So we introduce the Thread.

A process therefore becomes a container of threads; each thread has its own id, state, current instruction pointer, registers, and stack, but shares the data (that is, the same memory and addresses) of its process. We will also develop an accompanying exclusion mechanism for parallel operations by threads.

Design Data

Now, clarify our resource dispatch for one thread:

Immutable:

  • kernel stack

Mutable:

  • thread id
  • user stack
  • trap context
  • trap status
  • exit code

Every task is now a thread unit contained in one process, so a process is really a process rather than a task: it can own many tasks.

// os/src/task/process.rs

pub struct ProcessControlBlock {
    // immutable
    pub pid: PidHandle,
    // mutable
    inner: UPSafeCell<ProcessControlBlockInner>,
}

pub struct ProcessControlBlockInner {
    // ...
    pub task_res_allocator: RecycleAllocator,
    pub tasks: Vec<Option<Arc<TaskControlBlock>>>,
}

Notice that we should separate the user stack and the kernel stack; they shouldn't be allocated by the same logic. The kernel stack is immutable: we only need its top for entering the trap handler.

Because all threads use the same memory set, each thread's user stack and trap context are placed according to its thread id. We encapsulate these in the TaskUserRes data.

Many structures need id allocation, so we design a general id allocator.

// os/src/task/id.rs

pub struct RecycleAllocator {
    current: usize,
    recycled: Vec<usize>,
}

impl RecycleAllocator {
    pub fn new() -> Self {
        RecycleAllocator {
            current: 0,
            recycled: Vec::new(),
        }
    }
    pub fn alloc(&mut self) -> usize {
        if let Some(id) = self.recycled.pop() {
            id
        } else {
            self.current += 1;
            self.current - 1
        }
    }
    pub fn dealloc(&mut self, id: usize) {
        assert!(id < self.current);
        assert!(
            !self.recycled.iter().any(|i| *i == id),
            "id {} has been deallocated!",
            id
        );
        self.recycled.push(id);
    }
}

Kernel Stack Allocation

// os/src/task/id.rs

lazy_static! {
    static ref KSTACK_ALLOCATOR: UPSafeCell<RecycleAllocator> =
        unsafe { UPSafeCell::new(RecycleAllocator::new()) };
}

pub struct KernelStack(pub usize);

/// Return (bottom, top) of a kernel stack in kernel space.
pub fn kernel_stack_position(kstack_id: usize) -> (usize, usize) {
    let top = TRAMPOLINE - kstack_id * (KERNEL_STACK_SIZE + PAGE_SIZE);
    let bottom = top - KERNEL_STACK_SIZE;
    (bottom, top)
}

pub fn kstack_alloc() -> KernelStack {
    let kstack_id = KSTACK_ALLOCATOR.exclusive_access().alloc();
    let (kstack_bottom, kstack_top) = kernel_stack_position(kstack_id);
    KERNEL_SPACE.exclusive_access().insert_framed_area(
        kstack_bottom.into(),
        kstack_top.into(),
        MapPermission::R | MapPermission::W,
    );
    KernelStack(kstack_id)
}

impl Drop for KernelStack {
    fn drop(&mut self) {
        let (kernel_stack_bottom, _) = kernel_stack_position(self.0);
        let kernel_stack_bottom_va: VirtAddr = kernel_stack_bottom.into();
        KERNEL_SPACE
            .exclusive_access()
            .remove_area_with_start_vpn(kernel_stack_bottom_va.into());
    }
}

We will do the same for user stack:

// os/src/config.rs

pub const TRAMPOLINE: usize = usize::MAX - PAGE_SIZE + 1;
pub const TRAP_CONTEXT_BASE: usize = TRAMPOLINE - PAGE_SIZE;

// os/src/task/id.rs

fn trap_cx_bottom_from_tid(tid: usize) -> usize {
    TRAP_CONTEXT_BASE - tid * PAGE_SIZE
}

fn ustack_bottom_from_tid(ustack_base: usize, tid: usize) -> usize {
    ustack_base + tid * (PAGE_SIZE + USER_STACK_SIZE)
}

Then TaskUserRes can allocate its trap context and user stack.

// impl TaskUserRes
pub fn alloc_user_res(&self) {
    let process = self.process.upgrade().unwrap();
    let mut process_inner = process.inner_exclusive_access();
    // alloc user stack
    let ustack_bottom = ustack_bottom_from_tid(self.ustack_base, self.tid);
    let ustack_top = ustack_bottom + USER_STACK_SIZE;
    process_inner.memory_set.insert_framed_area(
        ustack_bottom.into(),
        ustack_top.into(),
        MapPermission::R | MapPermission::W | MapPermission::U,
    );
    // alloc trap_cx
    let trap_cx_bottom = trap_cx_bottom_from_tid(self.tid);
    let trap_cx_top = trap_cx_bottom + PAGE_SIZE;
    process_inner.memory_set.insert_framed_area(
        trap_cx_bottom.into(),
        trap_cx_top.into(),
        MapPermission::R | MapPermission::W,
    );
}

Now, combine all things together:

// os/src/task/task.rs

pub struct TaskControlBlock {
    // immutable
    pub process: Weak<ProcessControlBlock>,
    pub kstack: KernelStack,
    // mutable
    inner: UPSafeCell<TaskControlBlockInner>,
}

pub struct TaskControlBlockInner {
    pub res: Option<TaskUserRes>,
    pub trap_cx_ppn: PhysPageNum,
    pub task_cx: TaskContext,
    pub task_status: TaskStatus,
    pub exit_code: Option<i32>,
}

Design Data Operation

The scheduler still operates on tasks rather than processes, because a task is the smallest schedulable unit. However, we also need some interfaces keyed by process id.

pub fn add_task(task: Arc<TaskControlBlock>);
pub fn remove_task(task: Arc<TaskControlBlock>);
pub fn fetch_task() -> Option<Arc<TaskControlBlock>>;

pub fn pid2process(pid: usize) -> Option<Arc<ProcessControlBlock>>;
pub fn insert_into_pid2process(pid: usize, process: Arc<ProcessControlBlock>);
pub fn remove_from_pid2process(pid: usize);

Actually, much of the logic stays the same. For example:

// os/src/task/process.rs

impl ProcessControlBlock {
    pub fn new(elf_data: &[u8]) -> Arc<Self> {
        // ...
        let pid_handle = pid_alloc();
        let process = ...;
        let task = Arc::new(TaskControlBlock::new(
            Arc::clone(&process),
            ustack_base,
            true,
        ));
        // initiation of task...

        let mut process_inner = process.inner_exclusive_access();
        process_inner.tasks.push(Some(Arc::clone(&task)));
        drop(process_inner);
        insert_into_pid2process(process.getpid(), Arc::clone(&process));
        // add main thread to scheduler
        add_task(task);
        process
    }
}

When we fork a process, we only copy the first task, which is the calling thread itself, so no other threads are copied.

pub fn fork(self: &Arc<Self>) -> Arc<Self> {
    let child = ...;
    parent.children.push(Arc::clone(&child));
    let task = Arc::new(TaskControlBlock::new(
        Arc::clone(&child),
        parent
            .get_task(0)
            .inner_exclusive_access()
            .res
            .as_ref()
            .unwrap()
            .ustack_base(),
        // here we do not allocate trap_cx or ustack again
        // but mention that we allocate a new kstack here
        false,
    ));
    let mut child_inner = child.inner_exclusive_access();
    child_inner.tasks.push(Some(Arc::clone(&task)));
    drop(child_inner);
    ...
}

Design System Operation

To create a thread, as a naive designer we only need the function entry address and its argument. Yes, that's it!

// os/src/syscall/thread.rs

pub fn sys_thread_create(entry: usize, arg: usize) -> isize {
    let task = current_task().unwrap();
    let process = task.process.upgrade().unwrap();
    // create a new thread
    let new_task = Arc::new(TaskControlBlock::new(
        Arc::clone(&process),
        task.inner_exclusive_access().res.as_ref().unwrap().ustack_base,
        true,
    ));
    // add new task to scheduler
    add_task(Arc::clone(&new_task));
    let new_task_inner = new_task.inner_exclusive_access();
    let new_task_res = new_task_inner.res.as_ref().unwrap();
    let new_task_tid = new_task_res.tid;
    let mut process_inner = process.inner_exclusive_access();
    // add new thread to current process
    let tasks = &mut process_inner.tasks;
    while tasks.len() < new_task_tid + 1 {
        tasks.push(None);
    }
    tasks[new_task_tid] = Some(Arc::clone(&new_task));
    let new_task_trap_cx = new_task_inner.get_trap_cx();
    *new_task_trap_cx = TrapContext::app_init_context(
        entry,
        new_task_res.ustack_top(),
        kernel_token(),
        new_task.kstack.get_top(),
        trap_handler as usize,
    );
    (*new_task_trap_cx).x[10] = arg;
    new_task_tid as isize
}

Now sys_exit receives an exit_code and recycles the thread's resources. Notice: if tid == 0 (the main thread of the process), the whole process exits, and its child processes are moved to the init process.

// pub fn exit_current_and_run_next(exit_code: i32) {
// ...
{
    let mut initproc_inner = INITPROC.inner_exclusive_access();
    for child in process_inner.children.iter() {
        child.inner_exclusive_access().parent = Some(Arc::downgrade(&INITPROC));
        initproc_inner.children.push(child.clone());
    }
}

let mut recycle_res = Vec::<TaskUserRes>::new();
for task in process_inner.tasks.iter().filter(|t| t.is_some()) {
    let task = task.as_ref().unwrap();
    remove_inactive_task(Arc::clone(&task));
    let mut task_inner = task.inner_exclusive_access();
    if let Some(res) = task_inner.res.take() {
        recycle_res.push(res);
    }
}

sys_waittid checks a thread's state and recycles it if possible, returning -2 if it has not exited yet. Why do we need it? Because sys_exit can't fully recycle the exiting thread itself: another thread calls waittid to remove it from the tasks vector, after which Rust drops the remaining resources.

// os/src/syscall/thread.rs

/// thread does not exist, return -1
/// thread has not exited yet, return -2
/// otherwise, return thread's exit code
pub fn sys_waittid(tid: usize) -> i32 {
    let task = current_task().unwrap();
    let process = task.process.upgrade().unwrap();
    let task_inner = task.inner_exclusive_access();
    let mut process_inner = process.inner_exclusive_access();
    // a thread cannot wait for itself
    if task_inner.res.as_ref().unwrap().tid == tid {
        return -1;
    }
    let mut exit_code: Option<i32> = None;
    let waited_task = process_inner.tasks[tid].as_ref();
    if let Some(waited_task) = waited_task {
        if let Some(waited_exit_code) = waited_task.inner_exclusive_access().exit_code {
            exit_code = Some(waited_exit_code);
        }
    } else {
        // waited thread does not exist
        return -1;
    }
    if let Some(exit_code) = exit_code {
        // dealloc the exited thread
        process_inner.tasks[tid] = None;
        exit_code
    } else {
        // waited thread has not exited
        -2
    }
}

Chapter 8-2

Introduction

We will now develop the exclusion mechanism mentioned previously.

Besides the construction itself, we need to abstract the possible situations of data sharing. The usual naive scenario: a thread wants to modify something, but due to a thread switch the data has already been modified, and we get a wrong result. Based on this, we want an operation to be Atomic, meaning it excludes all other operations. Later we can relax this restriction and generalize it.

Generalization:

  • Allow a finite number of threads to join one atomic operation.
  • Allow a condition on the atomic operation.

Before such generalization, we need a way to talk about atomic operations. We call the content of such an operation a Critical Section, and multiple thread operations in an indeterminate time order a Race Condition. The basic problem of data sharing pushes us to separate the operations of different threads; we can't restrict the data itself, because the problem lies in the modifications made by threads, so we need to Lock operations!

So, if there is a lock shared by threads, each thread can declare Lock it!, and no other thread can enter that section until it is unlocked.

Now, back to our generalization. If the lock has a bound on the number of holders, many threads can enter until the bound is reached; that is also a reasonable design, and we call it a Semaphore. If the lock carries a signal that one thread can send to allow others to proceed, that too is a reasonable design, and we call it a Condition Variable.

If the real minimal shared thing is the Lock rather than the data, we can discard the so-called data problem and focus on the lock itself: each thread can do anything while holding the lock, excluding all others.

Design

Whatever kind of lock it is, it is shared among threads.

pub struct ProcessControlBlock {
    // immutable
    pub pid: PidHandle,
    // mutable
    inner: UPSafeCell<ProcessControlBlockInner>,
}
pub struct ProcessControlBlockInner {
    ...
    pub lock_list: Vec<Option<Arc<Lock>>>,
}

pub struct Lock {
    pub inner: UPSafeCell<LockInner>,
}
pub struct LockInner {
    pub data: ...,
    pub wait_queue: VecDeque<Arc<TaskControlBlock>>,
}

In this design, a lock can push a thread onto wait_queue to stop it, and pop from the front to resume one. data is a placeholder generalized over the various kinds of lock.

A process then owns many locks used under various conditions; one can take them as a generalization of the many pieces of data we want to share (though nothing here refers to real data).

Basic Lock

Now we construct a basic lock allowing simple lock and unlock operations.

pub trait Mutex: Sync + Send {
    fn lock(&self);
    fn unlock(&self);
}

Usually there are U-level, M-level, and S-level implementations. We will first try the U-level one, then learn the heuristic design at M-level, and extend the basic idea to S-level.


U-level

A naive approach is to declare a global boolean indicating the locked state: lock spins while the boolean is true, then sets it to true; unlock sets it to false to release.

static mut mutex: i32 = 0;

fn lock() {
    while (mutex == 1) {}
    mutex = 1;
}

fn unlock() {
    mutex = 0;
}

However, that's wrong! We can't build a lock out of the very thing we want to protect: threads can be interleaved at any instruction and break it. Does that mean a U-level lock is impossible? Consider the real situation: imagine two threads modifying one thing at nearly the same time. If we could set two global states in operations that exclude each other (for example, one sets a state to 1 while another sets it to 0), then only one of the operations can take effect, and we can check that condition to decide who gets the lock.

static mut flag: [i32; 2] = [0, 0]; // which thread wants the lock?
static mut turn: i32 = 0;           // whose turn is it? (thread 0 or 1?)

fn lock() {
    flag[self] = 1;  // declare that we want the lock (self: thread ID)
    turn = 1 - self; // let the other thread go first
    while (flag[1 - self] == 1) && (turn == 1 - self) {} // busy-wait
}

fn unlock() {
    flag[self] = 0; // give up the lock
}

Analyzing the code (this is Peterson's algorithm): whenever one flag is 1, or both are, indicating that some thread wants the lock, turn acts as a tie-breaking state relative to flag. Even if the other thread writes turn at the same time, turn can end up in only one of the two states, so only one thread passes the busy-wait and gets the lock.


M-level

Is there any predefined atomic operation in the instruction set that we can build a lock on? The answer is yes; in RISC-V there are:

  • AMO: Atomic memory operation
  • LR/SC: Load Reserved/Store Conditional

AMO: reads a value from memory, writes a new value, and stores the old value into the target register (e.g. amoadd.w rd, rs2, (rs1)).

LR/SC: LR reads memory into the target register and reserves the address; SC then checks the reservation and, if still valid, writes data to that address, producing a success flag (0/1) in the target register (e.g. lr.w rd, (rs1); sc.w rd, rs2, (rs1)).

We can use these to implement an atomic test-and-set:

# RISC-V sequence for implementing a TAS at (s1)
        li   t2, 1          # t2 <-- 1
Try:    lr   t1, (s1)       # t1 <-- mem[s1] (load reserved)
        bne  t1, x0, Try    # if t1 != 0 (locked), goto Try
        sc   t0, t2, (s1)   # mem[s1] <-- t2 (store conditional)
        bne  t0, x0, Try    # if t0 != 0 ('sc' instr failed), goto Try
Locked:
        ...                 # critical section
Unlock:
        sw   x0, 0(s1)      # mem[s1] <-- 0

The logic of Try is that mem[s1] is zero when unlocked and non-zero when locked. Try compares t1 with x0, i.e. mem[s1] with 0; if it is zero, it tries to store t2 into mem[s1]. The store-conditional writes a success flag into t0, which is again compared with x0: if the write failed, t0 is non-zero and we repeat from Try; otherwise we fall through into the critical section.

To Unlock, we write x0 to mem[s1], setting it back to zero, which is the unlocked state.

S-level

Then we can bring this function into Rust and package it. A simple refinement: while spinning in the retry loop, yield and give the CPU to other tasks.
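As a sketch of that refinement, here is a minimal yielding spinlock in user-space Rust, standing in for the kernel version (names like `SpinYieldMutex` are hypothetical; `compare_exchange` plays the role of the atomic test-and-set above, and `thread::yield_now` stands in for the kernel's yield):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

/// Minimal yielding spinlock sketch: on contention, yield instead of busy-waiting.
pub struct SpinYieldMutex {
    locked: AtomicBool,
}

impl SpinYieldMutex {
    pub const fn new() -> Self {
        Self { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // compare_exchange is the atomic test-and-set; on failure, give up the CPU.
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            thread::yield_now();
        }
    }

    pub fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```

The kernel's MutexSpin follows the same shape, except that yielding means rescheduling the current task rather than a thread call.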


Now, for any kind of lock, we can apply it to our structure.

First, when we create a lock, we either fill an empty slot in the list or push a new entry.

// os/src/syscall/sync.rs
pub fn sys_mutex_create(blocking: bool) -> isize {
    let process = current_process();
    let mutex: Option<Arc<dyn Mutex>> = if !blocking {
        Some(Arc::new(MutexSpin::new()))
    } else {
        Some(Arc::new(MutexBlocking::new()))
    };
    let mut process_inner = process.inner_exclusive_access();
    if let Some(id) = process_inner
        .mutex_list
        .iter()
        .enumerate()
        .find(|(_, item)| item.is_none())
        .map(|(id, _)| id)
    {
        process_inner.mutex_list[id] = mutex;
        id as isize
    } else {
        process_inner.mutex_list.push(mutex);
        process_inner.mutex_list.len() as isize - 1
    }
}

When we call lock, we provide the corresponding id of the lock. If it's already locked, the current task is pushed onto wait_queue; otherwise we take the lock and continue.

// os/src/syscall/sync.rs
pub fn sys_mutex_lock(mutex_id: usize) -> isize {
    let process = current_process();
    let process_inner = process.inner_exclusive_access();
    let mutex = Arc::clone(process_inner.mutex_list[mutex_id].as_ref().unwrap());
    drop(process_inner);
    drop(process);
    mutex.lock();
    0
}
// os/src/sync/mutex.rs
impl Lock for MutexBlocking {
    fn lock(&self) {
        let mut mutex_inner = self.inner.exclusive_access();
        if ... {
            mutex_inner.wait_queue.push_back(current_task().unwrap());
            // ... other operations
            drop(mutex_inner);
            block_current_and_run_next();
        } else {
            // ... other operations
        }
    }
}

Unlocking is the reverse operation:

// os/src/syscall/sync.rs
pub fn sys_mutex_unlock(mutex_id: usize) -> isize {
    let process = current_process();
    let process_inner = process.inner_exclusive_access();
    let mutex = Arc::clone(process_inner.mutex_list[mutex_id].as_ref().unwrap());
    drop(process_inner);
    drop(process);
    mutex.unlock();
    0
}
// os/src/sync/mutex.rs
impl Mutex for MutexBlocking {
    fn unlock(&self) {
        let mut mutex_inner = self.inner.exclusive_access();
        // ... other operations
        if ... {
            if let Some(waking_task) = mutex_inner.wait_queue.pop_front() {
                add_task(waking_task);
            }
        }
    }
}

Semaphore

It's simple: we only need to switch the boolean to a number and check a bound. The initial count is the bound; each thread that enters decrements it by one, and each release increments it by one. We only need to check whether the count is positive or negative.

Apply our structure:

pub fn up(&self) {
    let mut inner = self.inner.exclusive_access();
    inner.count += 1;
    if inner.count <= 0 {
        if let Some(task) = inner.wait_queue.pop_front() {
            add_task(task);
        }
    }
}

pub fn down(&self) {
    let mut inner = self.inner.exclusive_access();
    inner.count -= 1;
    if inner.count < 0 {
        inner.wait_queue.push_back(current_task().unwrap());
        drop(inner);
        block_current_and_run_next();
    }
}

If the initial count equals 1, we are back to a mutex, which admits a single thread!

We can also use a semaphore for synchronization: set count to 0. The first thread to call down will block; another thread can then call up, incrementing the count and releasing it. This orders the two threads: the up-caller always completes its part before the blocked thread proceeds.

In the example below, first always finishes its work before second continues.

const SEM_SYNC: usize = 0; // semaphore ID
unsafe fn first() -> ! {
    sleep(10);
    println!("First work and wakeup Second");
    semaphore_up(SEM_SYNC); // V operation
    exit(0)
}
unsafe fn second() -> ! {
    println!("Second want to continue,but need to wait first");
    semaphore_down(SEM_SYNC); // P operation
    println!("Second can work now");
    exit(0)
}

Condition Variable

If we want one thread to be able to release the lock for others, we need the CondVar. It dispatches operations through a wait_queue: when a thread signals, it pops a waiting thread out, effectively telling it You are free!. When a thread waits, it pushes itself onto the queue. The unlock/lock pair inside wait is important: unlocking allows other threads to modify the condition, but it must happen after the push; if the signal could arrive before the push, we would never receive it. We don't encapsulate the condition check inside the CondVar, because the condition should be left to the user to design; we only expose the interface.

pub fn signal(&self) {
    let mut inner = self.inner.exclusive_access();
    if let Some(task) = inner.wait_queue.pop_front() {
        add_task(task);
    }
}
pub fn wait(&self, mutex: Arc<dyn Mutex>) {
    let mut inner = self.inner.exclusive_access();
    inner.wait_queue.push_back(current_task().unwrap());
    drop(inner);
    mutex.unlock();
    block_current_and_run_next();
    mutex.lock();
}

However, since the condition check is left to the user, we can't prevent the condition from being violated through data sharing, so we usually wrap this section in a mutex.

static mut A: usize = 0; // global variable

const CONDVAR_ID: usize = 0;
const MUTEX_ID: usize = 0;

unsafe fn first() -> ! {
    sleep(10);
    println!("First work, Change A --> 1 and wakeup Second");
    mutex_lock(MUTEX_ID);
    A = 1;
    condvar_signal(CONDVAR_ID);
    mutex_unlock(MUTEX_ID);
    ...
}
unsafe fn second() -> ! {
    println!("Second want to continue,but need to wait A=1");
    mutex_lock(MUTEX_ID);
    while A == 0 {
        condvar_wait(CONDVAR_ID, MUTEX_ID);
    }
    mutex_unlock(MUTEX_ID);
    ...
}

We can see that once A == 1, second stops waiting in the loop and proceeds.

Day-1

Component Kernel

Building on the experiments, we construct the kernel incrementally, by demand.

  • UniKernel: a single privilege level (S), with the app linked into the kernel.

Each further kernel form can be considered a construction on top of the unikernel:

  • MacroKernel: adds U-level management, supporting multiple apps, process management, etc.
  • Hypervisor: adds a virtualization state, with restricted communication between U-level and S-level.

ArceOS Design

graph TD
App <--> Runtime
Runtime <--> HAL

The design of ArceOS is simple. At the bottom, the HAL (axhal) abstracts the hardware, initializing traps, stacks, the MMU, and registers for the various architectures. Above it, the Runtime (the ax* crates) is split into many components supporting various environments, like net, task, fs, etc.

Each arrow is bidirectional: at boot, control flows bottom-up to initialize the App; when the App calls something, it flows top-down to invoke the functionality.

In a real build, we select components via cargo features.

graph TD
App --> axstd
axstd --> |axfeat| aceros_api
aceros_api --> axruntime
axruntime -->|alloc| axalloc
axruntime --> axhal
axruntime -->|irq| irq
axruntime -->|multitask| axtask

Day-2

Paging

We delve into paging.

Example

/// Physical address for pflash#1
const PFLASH_START: usize = 0x2200_0000;

#[cfg_attr(feature = "axstd", no_mangle)]
fn main() {
    // Make sure that we can access the pflash region.
    let va = phys_to_virt(PFLASH_START.into()).as_usize();
    let ptr = va as *const u32;
    unsafe {
        println!("Try to access dev region [{:#X}], got {:#X}", va, *ptr);
        let magic = mem::transmute::<u32, [u8; 4]>(*ptr);
        println!("Got pflash magic: {}", str::from_utf8(&magic).unwrap());
    }
}

PFlash is qemu's simulation of flash memory. When qemu boots, it automatically maps the file at a fixed MMIO address, where it can be accessed directly.

Paging: features = ["paging"] enables virtual memory management to support MMIO. It is located in axruntime.

The physical layout is:

  • qemu fdt: from 0x0c00_0000 to 0x3000_0000, the space of devices.
  • SBI: from 0x8000_0000 to 0x8020_0000. The RISC-V Supervisor Binary Interface provides an interface for software to manipulate device-level things.
  • Kernel image: from 0x8020_0000. _skernel marks the start of S-level contents (static data, code, etc.); memory after _ekernel is left for other use.
#[link_section = ".data.boot_page_table"]
static mut BOOT_PT_SV39: [u64; 512] = [0; 512];

unsafe fn init_boot_page_table() {
    // 0x8000_0000..0xc000_0000, VRWX_GAD, 1G block
    BOOT_PT_SV39[2] = (0x80000 << 10) | 0xef;
    // 0xffff_ffc0_8000_0000..0xffff_ffc0_c000_0000, VRWX_GAD, 1G block
    // shift 10 bits to store flags
    BOOT_PT_SV39[0x102] = (0x80000 << 10) | 0xef;
}

unsafe fn init_mmu() {
    let page_table_root = BOOT_PT_SV39.as_ptr() as usize;
    satp::set(satp::Mode::Sv39, 0, page_table_root >> 12);
    riscv::asm::sfence_vma_all();
}

Each top-level entry maps a 1G (0x4000_0000) block. The range 0x8000_0000..0xc000_0000 is mapped at pgd_idx = 2, and 0xffff_ffc0_8000_0000..0xffff_ffc0_c000_0000 at pgd_idx = 0x102; both entries point at the same physical gigabyte.
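The arithmetic behind those two entries can be checked with a small sketch (helper names `block_entry` and `vpn2` are mine, not from the code above; an Sv39 PTE stores PPN << 10 plus flag bits, and the top-level index is bits 38..30 of the virtual address):

```rust
const FLAGS_VRWX_GAD: u64 = 0xef; // V|R|W|X|G|A|D bits used by the boot table

/// Build a 1G-block page-table entry for a physical address.
fn block_entry(pa: u64) -> u64 {
    ((pa >> 12) << 10) | FLAGS_VRWX_GAD
}

/// Top-level (pgd) index: bits 38..30 of the virtual address.
fn vpn2(va: u64) -> usize {
    ((va >> 30) & 0x1ff) as usize
}
```

For example, `block_entry(0x8000_0000)` reproduces `(0x80000 << 10) | 0xef`, `vpn2(0x8000_0000)` gives 2 (the identity-mapping slot), and `vpn2(0xffff_ffc0_8000_0000)` gives 0x102 (the high-half slot).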

Task

Example

let worker = thread::spawn(move || {
    println!("Spawned-thread ...");

    // Make sure that we can access the pflash region.
    let va = phys_to_virt(PFLASH_START.into()).as_usize();
    let ptr = va as *const u32;
    let magic = unsafe {
        mem::transmute::<u32, [u8; 4]>(*ptr)
    };
    if let Ok(s) = str::from_utf8(&magic) {
        println!("Got pflash magic: {s}");
        0
    } else {
        -1
    }
});

Tasks run concurrently and are dispatched by a scheduling strategy. A blocked task is moved to wait_queue to wait; a ready task is moved to run_queue, the scheduler's queue, to be dispatched.

Message Communication

Example

let q1 = Arc::new(SpinNoIrq::new(VecDeque::new()));
let q2 = q1.clone();

let worker1 = thread::spawn(move || {
    println!("worker1 ...");
    for i in 0..=LOOP_NUM {
        println!("worker1 [{i}]");
        q1.lock().push_back(i);
        // NOTE: If worker1 doesn't yield, others have
        // no chance to run until it exits!
        thread::yield_now();
    }
    println!("worker1 ok!");
});

let worker2 = thread::spawn(move || {
    println!("worker2 ...");
    loop {
        if let Some(num) = q2.lock().pop_front() {
            println!("worker2 [{num}]");
            if num == LOOP_NUM {
                break;
            }
        } else {
            println!("worker2: nothing to do!");
            // TODO: it should sleep and wait for notify!
            thread::yield_now();
        }
    }
    println!("worker2 ok!");
});

Cooperative Scheduling: each task must kindly yield or exit by itself, otherwise it blocks everyone, because the CPU is owned by whichever task is currently running.

Preemptive Scheduling: a task is suspended automatically when the external conditions allow it (no lock held, no device access) and an internal condition triggers it (the current time slice runs out). We can use a disable_count to record this; with multiple restricting conditions, we simply sum them up.

axhal::irq::register_handler(TIMER_IRQ_NUM, || {
    update_timer();
    #[cfg(feature = "multitask")]
    axtask::on_timer_tick();
});

// Enable IRQs before starting app
axhal::arch::enable_irqs();

on_timer_tick is triggered once per time slice. When the timer ticks, run_queue checks and suspends the current task if possible.

We can make scheduling more dynamic: each task has a priority, and while it occupies the CPU it accumulates a vruntime, adjusted as init_vruntime + (delta / weight(nice)), where delta is incremented by the timer and weight(nice) reflects the task's priority. We ensure the task with the lowest vruntime is placed at the top of the queue.
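The vruntime formula above can be sketched as follows (the `Task` fields and `pick_next` helper are mine for illustration; real CFS-style schedulers keep tasks in an ordered structure rather than scanning a slice):

```rust
/// Minimal vruntime sketch: vruntime = init_vruntime + delta / weight(nice).
struct Task {
    init_vruntime: u64,
    delta: u64,  // accumulated by the timer tick
    weight: u64, // derived from the task's priority (nice)
}

impl Task {
    fn vruntime(&self) -> u64 {
        self.init_vruntime + self.delta / self.weight
    }
}

/// Index of the task with the smallest vruntime: the next one to run.
fn pick_next(tasks: &[Task]) -> usize {
    tasks
        .iter()
        .enumerate()
        .min_by_key(|(_, t)| t.vruntime())
        .map(|(i, _)| i)
        .unwrap()
}
```

With equal delta, a higher weight (higher priority) yields a lower vruntime, so the higher-priority task is picked first.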

Day-3

Device

In common, devices can be separated into FS, Net, and Display categories.

/// A structure that contains all device drivers, organized by their category.
#[derive(Default)]
pub struct AllDevices {
    /// All network device drivers.
    #[cfg(feature = "net")]
    pub net: AxDeviceContainer<AxNetDevice>,
    /// All block device drivers.
    #[cfg(feature = "block")]
    pub block: AxDeviceContainer<AxBlockDevice>,
    /// All graphics device drivers.
    #[cfg(feature = "display")]
    pub display: AxDeviceContainer<AxDisplayDevice>,
}

Devices are initialized in axruntime, where the axdriver module is loaded to probe each device and mount its driver.

In qemu, probing sends a request to each virtio-mmio slot: a device with a driver responds, otherwise 0 is returned, meaning no driver.

Block Driver

A block driver provides the interface to read and write blocks, supplying IO operations and persistent storage.

ArceOS uses the axfs module, with the interface defined by vfs and concrete implementations ramfs and devfs.

Monolith

At U-level, we separate kernel memory and user memory, allowing a user context per process.

The basic logic is: construct a new user address space, load the file into it, initialize the user stack, then spawn a user task at app_entry.

The top part of the root page table is shared as kernel space; the part below is independent per-process user space.

With user-space separation, many kinds of resources can no longer be shared as globals; instead, each user app holds a TaskExt referencing its own independent resources.

In TaskInner, we store a pointer to TaskExt, whose concrete type is declared by a macro.

struct AxTask {
    ...
    task_ext_ptr: *mut u8
}
/// Task extended data for the monolithic kernel.
pub struct TaskExt {
    /// The process ID.
    pub proc_id: usize,
    /// The user space context.
    pub uctx: UspaceContext,
    /// The virtual memory address space.
    pub aspace: Arc<Mutex<AddrSpace>>,
}

// It expands into a trait implementation that casts the pointer
// into a reference of the `TaskExt` type.
def_task_ext!(TaskExt);

pub fn spawn_user_task(aspace: Arc<Mutex<AddrSpace>>, uctx: UspaceContext) -> AxTaskRef {
    let mut task = TaskInner::new(
        || {
            let curr = axtask::current();
            let kstack_top = curr.kernel_stack_top().unwrap();
            ax_println!(
                "Enter user space: entry={:#x}, ustack={:#x}, kstack={:#x}",
                curr.task_ext().uctx.get_ip(),
                curr.task_ext().uctx.get_sp(),
                kstack_top,
            );
            unsafe { curr.task_ext().uctx.enter_uspace(kstack_top) };
        },
        "userboot".into(),
        crate::KERNEL_STACK_SIZE,
    );
    task.ctx_mut()
        .set_page_table_root(aspace.lock().page_table_root());
    task.init_task_ext(TaskExt::new(uctx, aspace));
    axtask::spawn_task(task)
}

NameSpace

To reuse resources, an axns_resource section is constructed at compile time to form a global namespace. Each resource is shared via Arc.

When uniqueness is demanded, we allocate separate space and copy the resource.

Page Fault

We can implement lazy allocation of user-space memory: register our handler for PAGE_FAULT and call handle_page_fault on the AddrSpace.

impl AddrSpace {
    pub fn handle_page_fault(...) -> ... {
        if let Some(area) = self.areas.find(vaddr) {
            let orig_flags = area.flags();
            if orig_flags.contains(access_flags) {
                return area
                    .backend()
                    .handle_page_fault(vaddr, orig_flags, &mut self.pt);
            }
        }
    }
}

A MemoryArea backend works in one of two ways:

  • Linear: directly constructs the mapping on physically contiguous memory.
  • Alloc: constructs only an empty mapping, and calls handle_page_fault later to really allocate memory.

User App

ELF is the default format of many apps. The kernel takes the responsibility of loading the app to the correct region.

Notice that offsets in the file and in virtual space may differ due to ELF optimizations.

To load apps from Linux, we construct a POSIX API layer mimicking the Linux interface.

Day-4

Hypervisor

A physical computer system can host multiple virtual computer systems, each with its own virtual resources. Just like apps at U-level, each virtual system believes it uniquely owns those resources.

An emulator works like an interpreter, simulating a virtual system with loose efficiency demands.

A hypervisor executes most instructions directly, as an isomorphism of the simulated virtual system, to gain a huge efficiency advantage.

  • Type I: each virtual OS sits directly on the hardware, as equals.
  • Type II: virtual OSes run on top of a host OS.

Each instance, a Guest (OS image), is loaded onto our host kernel.

Design

We focus only on the Type I hypervisor.

The privilege levels are extended: since we must separate host and guest, U-level splits into U and VU. So does the kernel level, separating the host (the hypervisor) from the guest (the virtual OS): S-level splits into HS and VS.

Communication

Privileged operations are implemented by communication between HS and VS: when there's an sbi-call, VS traps to HS to implement it.

In RISC-V's hstatus, the virtualization mode is described by:

SPV: whether the trap source was VS or HS, which determines whether sret returns to VU or U.
SPVP: the permission for HS to modify V memory.

We store the guest context and the host context, switching between them at VM-Exit and sret. This is implemented by run_guest and guest_exit, each the other's reverse.

Timer: sbi-calls are intercepted with a virtual clock in VS. On set-timer, we clear the guest timer and arm the host timer; on interrupt, we clear the host timer and inject the guest timer, waiting for the next timer request.

Memory is separated between guest and host too. GVA maps to GPA as guest memory; the host then backs GPA with HPA, which is the virtualization process (a two-stage translation).

Dev: each device's start vaddr is recorded; on a VM-Exit caused by a PageFault, the device is looked up by find(addr) and handle_mmio is called for the corresponding request.

Starting Point

One afternoon more than a decade ago, I sat at my computer having finished all the weekly quests. The last member of my game guild logged off to go home for dinner. After more than ten hours online straight, I quit the game and powered off, and a huge sense of emptiness suddenly welled up.

I graduated from a vocational college in Qingdao with a computer major. After finishing a Tarena training course, I joined a Japan-outsourcing company, maintaining a VB6.0 program developed in 1995. In Qingdao, a monthly salary of three thousand yuan barely covered a frugal life. I always felt my ability was worth more than this, that life shouldn't look like that.

I wanted change but had no direction, lacking both resources and opportunities.

Later, browsing forums, I heard that The Art of Computer Programming was the bible of computing. This directionless young man decided to chew through the tome. Its author is Donald E. Knuth, Turing Award winner, pioneer of the algorithms field, honored as the "father of algorithms". To accompany the book, the author even wrote the companion textbook Concrete Mathematics. I bought that book, but gave up after persisting for only one day.

Unable to find an open course on Concrete Mathematics, I happened to notice its table of contents resembled discrete mathematics. Learning that Tsinghua University offered a Combinatorial Mathematics course on XuetangX, I signed up immediately and completed it. Meanwhile I learned on Zhihu that to supplement compiler knowledge one could study SICP, so I bought it and followed the ancient lecture recordings through the first three chapters.

Those two courses earned me an offer at six times my salary, and I moved from Qingdao to Shanghai to join a startup. Now, over a decade into the career, I've gone from outsourced programmer to front-end/full-stack engineer, then system architect, even serving as CTO. Compared with my starting point, this might count as a comeback.

But when exactly did the gears of fate begin to turn?

Was it that empty afternoon when I closed the game? Or the day I bought Concrete Mathematics and opened its title page?

Neither. I have always believed the gears truly began turning the moment I enrolled in the Combinatorial Mathematics course.

Now let me ask the students of this OS training camp: have you felt your gears of fate begin to turn?

A Guide to Passing

Today's society is wrapped in information cocoons built by capital: bombarded daily by Python course ads, AI courses costing tens of thousands, knowledge-sharing groups costing thousands... The so-called senior lecturers from big tech enjoy the dividends of the era and exploit information asymmetry to harvest beginners, while their teaching material is patchwork boilerplate riddled with errors and misinformation.

Meanwhile, the excellent courses of first-rate universities are free and open yet little known. Congratulations on breaking out of the cocoon: this training camp is built on Tsinghua University's operating systems course, with a professional teaching team.

The official site lists the prerequisites:

  • Basics of the Rust language
  • Computer Organization and Design
  • RISC-V assembly and the privileged ISA

But as a practice-oriented camp, the final assessment is implementing an operating system and verifying it in QEMU. There are also hidden thresholds the site doesn't mention:

  • Git: are you fluent with commit/checkout/branch/merge/rebase?
  • Linux: do you have hands-on experience and know the common commands?
  • English: can you read English documentation (even with translation tools)?
  • Coding and debugging: do you have ten-thousand-line experience, and can you split code across multiple functions and files?
  • Network: can you access the international internet unimpeded and configure proxies or mirrors for your development tools?
  • Asking questions: can you describe a problem precisely and state exactly what help you need?

Each skill needs at least 10 hours of study. Missing any one of them will cost much more time and energy during the labs to make up.

Summary

Thanks to professors Chen Yu, Xiang Yong, Shi Lei and others for offering the course, to teacher Li Ming for promoting it, and to the TA team for their work. This course:

  • For the country: trains OS talent, supporting technological self-reliance
  • For the people: advances educational equity, giving students in resource-poor regions access to a first-rate course
  • For me: this vocational-college graduate finally stands at the same starting line as 985/211 students. I used to complain that fate was unfair; now the chance to prove myself is right in front of me, and all I have to do is give it everything.

Stage-1 Notes, 2025 OS Training Camp

A note up front: I started preparing ahead of the camp, so these records begin before the official opening. With my kind of foundation, if the slow bird doesn't fly first, it's afraid it won't keep up, haha.

3.12

Today I started learning Rust. I'd barely touched this language before, but I've learned a bit of Go; both lean functional, so there should be common ground.

All the experience posts said to read the "Rust Bible" at course.rs, so I set up the environment following its instructions. I'm on Ubuntu 20.04 and hit no problems installing. Following the author, I use VS Code with a few of the recommended extensions.

I learned cargo and ran hello world. Coming from C++, I used to be tortured by environments every time: packages that couldn't be found had to be compiled by hand, and on Windows at that... Not knowing how to write makefiles, I had to use CMake, which updates fast and where every library has its own import style; maybe I'd been tormented into thinking that was normal. It turns out a modern language can be this capable: no manual compilation or imports, and it even checks library availability. And Rust has performance rivaling C++, amazing! Building is very simple, and toml dependency management is simpler than any other language's. Rust natively supports UTF-8 strings, so text in any script works easily, no more MSVC charset torment. {} as a placeholder prints numbers, strings, and structs alike. Rust has good method-chaining ergonomics; fluent use of the standard library should be very convenient and elegant.

Chapter 1, variable binding and destructuring. Immutable by default: a Rust variable can't be modified unless marked mut. This design is quite special; unfamiliar at first, but it really does prevent many accidental-modification bugs. Shadowing: a same-named variable can cover a previous one, actually creating a new variable; unlike mut, it creates new storage and can even change the variable's type. Destructuring assignment: values can be extracted from tuples, structs, and other compound types into variables, very concise. Constants: declared with const, the type is mandatory, and the convention is all-caps with underscores. Naming: a leading underscore suppresses unused-variable warnings.

Overall, Rust's variable system feels rigorously designed. A bit uncomfortable at first, but these features genuinely lead to safer code, especially default immutability, which forces the developer to think about which variables really need to change.

Chapter 2, basic types. The primitives differ little from C++; the special one is the unit type (), whose only value is also (). The compiler must know every variable's type at compile time, but that doesn't mean annotating every variable: it infers types from values and usage context, and only in some cases does it need a manual type annotation. For integers, release builds don't check overflow; instead, detected overflow wraps by two's complement, and there are functions for checked arithmetic. Floating-point computation deserves real care: NaN is mathematically undefined, and anything interacting with it becomes NaN.

I noticed that ranges like for i in 1..=5 are interesting and convenient, and they work with characters too. Rust has quite a few numeric types; you need to know how many bytes each occupies, so you know its representable range and whether it can express negatives. Conversions must be explicit: Rust will never silently turn your 16-bit integer into a 32-bit one.

A Rust function body is a series of statements, optionally ending in an expression that returns the value. Distinguish statements from expressions: expressions always return a value; lines ending in a semicolon are statements.

Using ! as a return type means the function never returns; this syntax is typically used for functions that crash the program.

3.13

Today, chapter 3 of the basics: ownership and borrowing. To solve memory-safety problems, Rust proposes ownership, with three rules: every value in Rust is owned by a variable, called the value's owner; a value can have only one owner at a time; when the owner goes out of scope, the value is dropped.

let x = 5;
let y = x;

No ownership transfer happens in this code. The reason is simple: 5 is bound to x, then x's value is copied into y, so both x and y end up equal to 5. Integers are basic Rust types, fixed-size simple values, so both live on the stack and are assigned by automatic copy; no heap allocation is needed.

let s1 = String::from("hello");
let s2 = s1;

After s1 is assigned to s2, Rust considers s1 no longer valid, so nothing needs dropping when s1 leaves scope: ownership is transferred from s1 to s2, and s1 becomes invalid immediately after the assignment. It's similar to C++'s move. If you don't want to give up the original ownership, use clone(); Rust's basic types come with copy semantics built in.

Rust obtains references to variables through Borrowing. References are either mutable or immutable, the difference being mutability; the benefit of this restriction is that Rust prevents data races at compile time (two or more pointers accessing the same data simultaneously, at least one of them writing, with no synchronization mechanism). A mutable reference can't coexist with immutable ones: at most one mutable reference at a time, but any number of simultaneous immutable references.
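The borrow rules above can be seen in a small demo (the `demo` function is mine for illustration; note that with non-lexical lifetimes, immutable borrows end at their last use, which is why the later mutable borrow is accepted):

```rust
fn demo() -> String {
    let mut s = String::from("hello");

    let r1 = &s; // any number of immutable borrows may coexist
    let r2 = &s;
    println!("{r1} {r2}"); // last use of r1/r2: their borrows end here

    let r3 = &mut s; // now a single mutable borrow is allowed
    r3.push_str(" world");
    s
}
```

Moving `r3.push_str(...)` before the `println!` would make the mutable and immutable borrows overlap, and the compiler would reject it.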

Chapter 4, compound types. The slice &s[0..len] is handy for strings, byte sequences, and so on; 0..len is a Range, half-open on the right. When handling byte streams, note that a Chinese character takes three bytes in UTF-8. A string literal is itself a slice.

A tuple combines multiple types, so it's a compound type; its length is fixed, and so is the order of its elements. One can be created with:

let tup: (i32, f64, u8) = (500, 6.4, 1);

The variable tup is bound to the tuple value (500, 6.4, 1), of type (i32, f64, u8); a tuple groups multiple types in parentheses, and its values are accessed via pattern matching or the . operator.

Destructuring a tuple with a pattern:

let (x, y, z) = tup;

struct is similar to C: fields are accessed with . ; every field must be initialized when creating an instance, though not necessarily in declaration order. Once a field owning its value has been moved out, that field can no longer be accessed, but the other fields still can.

A struct must have a name, but the fields of a tuple struct need no names, for example:

struct Color(i32, i32, i32);
struct Point(i32, i32, i32);
let origin = Point(0, 0, 0);

Tuple structs are useful when you want an overall name but don't care about the field names.

If a struct contains a reference, a lifetime must be declared. You can derive the Debug attribute for a struct, or implement the fmt function to customize how it prints.

Enums are more powerful than in most languages: each variant can carry different data, like a struct:

enum Message {
    Quit,
    Move { x: i32, y: i32 },
    Write(String),
    ChangeColor(i32, i32, i32),
}

The Option enum handles absent values. Other languages usually have a null keyword, and we must handle the null exceptions that crash programs. Rust learned from those lessons and dropped null, expressing the situation with the Option enum instead. Option has two variants: Some(T), which holds a value, and None, which holds nothing:

enum Option<T> {
    Some(T),
    None,
}

Here T is a generic parameter: Some(T) means the variant carries data of type T; in other words, Some can contain data of any type.

The two most used array types in Rust are the fast, fixed-length array and the growable (but costlier) Vector; the book calls the former "array" and the latter "dynamic array". An array's three essentials: fixed length, same element type throughout, contiguous linear layout. Declaration:

let a: [i32; 5] = [1, 2, 3, 4, 5];

3.14

Today, chapters 5 (control flow) and 6 (pattern matching). Control flow is the easy part: for..in, while, loop. Pattern matching is the novel part for a C++er. match resembles switch, but its power lies in destructuring at the same time, e.g.:

match action {
    Action::Say(s) => {
        println!("{}", s);
    },
    _ => {},
}

if let is like a single-arm match, also used to fetch and destructure a value.

matches! matches an expression against a pattern and returns the result, true or false.
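A quick sketch of matches! in use (the helper names are mine for illustration; note that a guard can be attached to the pattern):

```rust
fn is_lowercase_letter(c: char) -> bool {
    // matches! expands to a match returning true/false
    matches!(c, 'a'..='z')
}

fn is_big_enough(opt: Option<i32>) -> bool {
    // a pattern plus an `if` guard
    matches!(opt, Some(x) if x > 2)
}
```

So is_lowercase_letter('f') is true, while is_big_enough(Some(1)) and is_big_enough(None) are both false.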

Both match and if let can trigger shadowing: a same-named variable in the current scope takes priority, which is convenient.

Destructuring Option comes up constantly in Rust: the result is either Some(T) or None, and handling the two cases by destructuring is very powerful.

There's also while let, which destructures in a loop; arrays and tuples destructure with the corresponding shapes; _ ignores a position; and ranges can be used in patterns as well.

Chapter 7, methods. impl implements methods for a struct or enum; public methods need a leading pub. A method that uses the struct's contents takes self as its first parameter, normally &self, or &mut self when mutating the contents; it can also be plain self, moving ownership into the method. Functions without a self reference (associated functions) are called with StructName::, everything else with . . An impl block can also declare constants and such, and a type may have multiple impl blocks.
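The three receiver forms can be sketched in one small type (the `Counter` type is mine for illustration):

```rust
struct Counter {
    value: i32,
}

impl Counter {
    // Associated function (no self): called as Counter::new()
    fn new() -> Self {
        Counter { value: 0 }
    }

    // &mut self: the method mutates the struct's contents
    fn incr(&mut self) {
        self.value += 1;
    }

    // &self: read-only access
    fn get(&self) -> i32 {
        self.value
    }
}
```

Usage: `let mut c = Counter::new(); c.incr();` then `c.get()` returns 1.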

Chapter 8, generics and traits. A generic parameter can be named anything, but by convention it's T; a precondition of using one is declaring it before use:

fn largest<T>(list: &[T]) -> T {

This generic function finds the largest value in a list whose elements have type T. First, largest<T> declares the generic parameter T, and only then uses it in the parameter list: list: &[T], a slice of elements of type T. Finally, the function's return type is also T.

T can be any type, but not every type can be compared; the compiler suggests constraining T with the std::cmp::PartialOrd trait.

Sometimes the compiler can't infer the generic argument you want, and you must declare it explicitly:

fn create_and_print<T>() where T: From<i32> + Display {
    let a: T = 100.into(); // create a variable `a` of type T, converted from 100
    println!("a is: {}", a);
}
fn main() {
    create_and_print::<i64>();
}

Generics in structs and enums:

enum Option<T> {
    Some(T),
    None,
}
struct Point<T, U> {
    x: T,
    y: U,
}
impl<T, U> Point<T, U> {
    fn x(&self) -> &T {
        &self.x
    }
}

Generics in methods: declare the generic parameters before use with impl<T>, so Rust knows the type inside Point's angle brackets is a generic rather than a concrete type. Note that the Point<T, U> after the impl is no longer a generic declaration but a complete struct type, because the struct we defined is Point<T, U>, not plain Point. Besides the struct's own generic parameters, methods can define extra ones, just like generic functions:

impl<T, U> Point<T, U> {
    fn mixup<V, W>(self, other: Point<V, W>) -> Point<T, W> {
        Point {
            x: self.x,
            y: other.y,
        }
    }
}

const generics: [i32; 3] and [i32; 2] really are two entirely different types, so one function call can't serve both. Const generics, i.e. generics over values, can handle the array-length problem:

fn display_array<T: std::fmt::Debug, const N: usize>(arr: [T; N]) {
    println!("{:?}", arr);
}
fn main() {
    let arr: [i32; 3] = [1, 2, 3];
    display_array(arr);
    let arr: [i32; 2] = [1, 2];
    display_array(arr);
}

const fn: constant functions. Normally functions are called and executed at run time, but in some scenarios we want values computed at compile time, for run-time performance or to satisfy compile-time constraints such as array lengths or constant values. With const fn, we can execute these functions at compile time and embed the results directly in the generated code.
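A minimal const fn sketch (the names `square`, `LEN`, and `make_buf` are mine for illustration), using the compile-time result as an array length:

```rust
// A const fn may be evaluated at compile time where a constant is required.
const fn square(n: usize) -> usize {
    n * n
}

const LEN: usize = square(4); // evaluated by the compiler

// LEN is usable as an array length because it is a compile-time constant.
fn make_buf() -> [u8; LEN] {
    [0u8; LEN]
}
```

Here `make_buf()` returns a 16-byte array; a plain (non-const) fn could not be used to compute LEN.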

Traits: if different types share the same behavior, we can define a trait and then implement it for those types. Defining a trait groups some methods together, describing the set of behaviors necessary to accomplish a goal.

pub trait Summary {
    fn summarize(&self) -> String;
}
pub struct Post {
    pub title: String,   // title
    pub author: String,  // author
    pub content: String, // content
}

impl Summary for Post {
    fn summarize(&self) -> String {
        format!("Article {}, by {}", self.title, self.author)
    }
}

pub struct Weibo {
    pub username: String,
    pub content: String
}

impl Summary for Weibo {
    fn summarize(&self) -> String {
        format!("{} posted a weibo: {}", self.username, self.content)
    }
}

On where a trait may be implemented, there's an important rule: to implement trait T for type A, at least one of A or T must be defined in the current scope! For example, we can implement the standard library's Display trait for the Post type above, because Post is defined in the current scope; and we can implement Summary for String in the current crate, because Summary is defined here.

A trait can define methods with default implementations; other types then need not implement those methods, or may choose to override them.

Using a trait as a function parameter:

pub fn notify(item: &impl Summary) {
    println!("Breaking news! {}", item.summarize());
}

impl Summary means exactly what it says: the item parameter is some type that implements the Summary trait.

To force two parameters to be the same type, you must use a trait bound:

pub fn notify<T: Summary>(item1: &T, item2: &T) {}

The generic type T says item1 and item2 must have the same type, and T: Summary says T must implement the Summary trait.

Multiple bounds: besides a single constraint, several can be specified:

pub fn notify(item: &(impl Summary + Display)) {}

Besides the syntactic-sugar form above, the trait-bound form also works:

pub fn notify<T: Summary + Display>(item: &T) {}

Trait bounds let us implement methods only for specific type + trait combinations, and also implement traits conditionally; for example, the standard library implements the ToString trait for every type that implements Display.

impl Trait can also state that a function returns some type implementing a trait:

fn returns_summarizable() -> impl Summary {
    Weibo {
        ...
    }
}

This impl Trait return form is extremely useful in one scenario: when the real return type is very complex and you don't know how to declare it.

#[derive(Debug)]: derive syntax; a marked item automatically receives the trait's default implementation code, inheriting the corresponding functionality. Debug, for instance, has a generated default implementation, and once a struct is marked you can print its objects with println!("{:?}", s).

Likewise Copy: marking a type gives it an automatic Copy implementation, so its values can copy themselves.

In short, derive produces Rust's default trait implementations, which greatly reduces the need to implement them by hand during development; if you have special needs, you can still override the implementation manually.

Trait Objects

pub trait Draw {
    fn draw(&self);
}

As long as a component implements the Draw trait, draw can be called to render it. Suppose Button and SelectBox components implement Draw:

pub struct Button {
    pub width: u32,
    pub height: u32,
    pub label: String,
}

impl Draw for Button {
    fn draw(&self) {
        // code to draw the button
    }
}

struct SelectBox {
    width: u32,
    height: u32,
    options: Vec<String>,
}

impl Draw for SelectBox {
    fn draw(&self) {
        // code to draw the select box
    }
}
fn draw1(x: Box<dyn Draw>) {
    // Because Box implements the Deref trait, the smart pointer auto-derefs
    // to the wrapped value, and that value's own `draw` method is called.
    x.draw();
}

fn draw2(x: &dyn Draw) {
    x.draw();
}

We also need a dynamic array to store these UI objects:

pub struct Screen {
    pub components: Vec<?>,
}

A trait object points to an instance of a type implementing the Draw trait, i.e. a Button or SelectBox instance; this mapping is stored in a table, through which the concrete type's method can be found at run time.

  • draw1 takes a Box<dyn Draw> trait object, created via Box::new(x)
  • draw2 takes a &dyn Draw trait object, created via &x
  • the dyn keyword is used only in trait-object type declarations, not at creation sites
pub struct Screen {
    pub components: Vec<Box<dyn Draw>>,
}

It stores a dynamic array whose element type is the trait object Box<dyn Draw>; any type implementing the Draw trait can be stored in it.

Now define a run method on Screen, rendering every UI component in the list to the screen:

impl Screen {
    pub fn run(&self) {
        for component in self.components.iter() {
            component.draw();
        }
    }
}

Generics are resolved at compile time: the compiler generates a copy of the code for each concrete type a generic parameter takes. This is static dispatch; done at compile time, it has zero impact on run-time performance. Its counterpart is dynamic dispatch, where the method to call is determined only at run time.

3.15

Chapter 9, collection types.

Creating a dynamic array with Vec::new

let v: Vec<i32> = Vec::new();

Here v is explicitly declared as Vec<i32>, because the compiler gets no type hint from Vec::new() and thus can't infer v's concrete type:

let mut v = Vec::new();
v.push(1);

Now v needs no manual annotation: from v.push(1) the compiler infers the element type i32, hence v's type Vec<i32>.

If the number of elements is known in advance, Vec::with_capacity(capacity) creates the array up front, avoiding the repeated allocation and copying caused by inserting lots of new data, which improves performance.

The vec! macro also creates an array and, unlike Vec::new, initializes it at creation:

let v = vec![1, 2, 3];

Again, v needs no annotation here; by checking the elements inside, the compiler infers that v's type is Vec<i32>.

Like structs, a Vector is dropped automatically once it goes out of scope:

{
    let v = vec![1, 2, 3];
    // ...
} // <- v goes out of scope and is dropped here

When a Vector is dropped, everything stored inside it is dropped too.

Borrowing several array elements at once: consider holding a borrow of an element while also mutating the vector

let mut v = vec![1, 2, 3, 4, 5];
let first = &v[0];
v.push(6);
println!("The first element is: {first}");

The compiler rejects this. A vector's size is dynamic: when the old buffer is too small, Rust allocates a bigger block of memory and copies the old array over, so the earlier reference would obviously point at invalid memory.

use std::collections::HashMap;

// Create a HashMap storing gem kinds and their counts
let mut my_gems = HashMap::new();

// Insert gem types and counts into the map
my_gems.insert("ruby", 1);
my_gems.insert("sapphire", 2);

3.16

Chapter 10, lifetimes.

Borrow checking: to guarantee the correctness of Rust's ownership and borrowing, Rust uses a borrow checker to verify the program's borrows.

Lifetimes in functions:

fn longest(x: &str, y: &str) -> &str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}
--------
 --> src/main.rs:9:33
  |
9 | fn longest(x: &str, y: &str) -> &str {
  |               ----     ----     ^ expected named lifetime parameter // the parameter needs a lifetime
  |

The compiler can't know whether the return value borrows from x or from y, and it needs to know in order to analyze the reference's lifetime after the call. When multiple references are involved, the compiler sometimes can't infer lifetimes automatically, and we must annotate them manually, giving the parameters appropriate lifetime annotations to help the borrow checker's analysis.

Lifetime annotations: the syntax starts with ', usually followed by a single lowercase letter; most people use 'a as the lifetime name. For reference parameters, the lifetime sits after the reference symbol &, separated from the referent type by a space:

&i32        // a reference
&'a i32     // a reference with an explicit lifetime
&'a mut i32 // a mutable reference with an explicit lifetime

A lifetime annotation by itself carries no meaning; the purpose of lifetimes is to tell the compiler how multiple references relate to each other. For example, suppose a function's first parameter first is a reference to an i32 with lifetime 'a, and its second parameter second is also a reference to an i32 with the same lifetime 'a. The annotation merely states that both first and second live at least as long as 'a; exactly how long each lives, or which lives longer, we cannot know.

fn useless<'a>(first: &'a i32, second: &'a i32) {}

Lifetime annotations in function signatures

fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}
  • As with generics, a lifetime parameter must be declared first: <'a>
  • x, y, and the return value all live at least as long as 'a (since the return value is either x or y)

This signature says that for some lifetime 'a, both parameters live at least as long as 'a, and the returned reference also lives at least as long as 'a. In practice this means the lifetime of the return value equals the smaller of the two parameters' lifetimes: although both parameters are annotated with 'a, their actual lifetimes may differ (being annotated 'a does not mean the lifetime equals 'a, only that it is at least 'a). Specifying lifetime parameters in a function signature does not change the real lifetimes of the references passed in or returned; it tells the compiler to reject the program whenever the constraint is not satisfied.

So longest does not know exactly how long x and y will live; it only needs to know that their scopes last at least as long as 'a.

The following example shows that the lifetime of result must equal the smaller of the two parameters' lifetimes:

fn main() {
    let string1 = String::from("long string is long");
    let result;
    {
        let string2 = String::from("xyz");
        result = longest(string1.as_str(), string2.as_str());
    }
    println!("The longest string is {}", result);
}
--------
error[E0597]: `string2` does not live long enough
 --> src/main.rs:6:44
  |
6 |         result = longest(string1.as_str(), string2.as_str());
  |                                            ^^^^^^^ borrowed value does not live long enough
7 |     }

In the code above, result must stay alive until the println!; since result's lifetime is 'a, 'a must last until the println!. But string2 is dropped before that point, so the constraint fails.

Lifetimes in structs:

struct ImportantExcerpt<'a> {
    part: &'a str,
}

The ImportantExcerpt struct has a reference field part, so it needs a lifetime annotation. The syntax is much like generic parameters: the lifetime parameter must be declared as <'a>. The annotation says that the string slice referenced by ImportantExcerpt must live at least as long as the struct itself.
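A short usage sketch: the owning String outlives the struct instance, so the reference stored in the field remains valid. The novel text is illustrative.

```rust
struct ImportantExcerpt<'a> {
    part: &'a str,
}

fn main() {
    // `novel` is created before `i` and dropped after it,
    // so the `&str` stored inside the struct stays valid.
    let novel = String::from("Call me Ishmael. Some years ago...");
    let first_sentence = novel.split('.').next().expect("could not find a '.'");
    let i = ImportantExcerpt { part: first_sentence };
    assert_eq!(i.part, "Call me Ishmael");
    println!("excerpt: {}", i.part);
}
```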

Lifetime elision: even without explicit lifetime annotations, compilation can still succeed. The reason is simple: to make the language easier to use, the compiler applies a set of lifetime elision rules.

The three elision rules:

The compiler uses three elision rules to decide which cases do not need explicit lifetime annotations. The first rule applies to input lifetimes; the second and third apply to output lifetimes. If none of the three rules applies, the compiler reports an error asking you to annotate lifetimes manually.

  1. Each reference parameter gets its own lifetime.

    For example, a function with one reference parameter has one lifetime annotation, fn foo<'a>(x: &'a i32); one with two reference parameters has two, fn foo<'a, 'b>(x: &'a i32, y: &'b i32); and so on.

  2. If there is exactly one input lifetime (only one reference among the parameters), that lifetime is assigned to all output lifetimes, i.e. every returned reference gets the input lifetime.

    For example, in fn foo(x: &i32) -> &i32, the lifetime of x is automatically assigned to the returned &i32, so the function is equivalent to fn foo<'a>(x: &'a i32) -> &'a i32.

  3. If there are multiple input lifetimes and one of them is &self or &mut self, the lifetime of &self is assigned to all output lifetimes.

    A &self parameter means the function is a method; this rule makes methods much more convenient to use.

3.17

Lifetimes in methods:

struct ImportantExcerpt<'a> {
    part: &'a str,
}

impl<'a> ImportantExcerpt<'a> {
    fn level(&self) -> i32 {
        3
    }
}
  • The impl block must use the struct's full name, including <'a>, because the lifetime annotation is part of the struct's type
  • Method signatures usually need no lifetime annotations, thanks to the first and third elision rules

Static lifetimes: a reference with the lifetime 'static can live as long as the entire program.

String literals are hard-coded into the Rust binary, so all string-literal variables have the 'static lifetime:

let s: &'static str = "it just lives long, heh";

Errors in Rust fall into two main categories:

  • Recoverable errors: errors that are acceptable from the system's global point of view, such as failures while handling a user's request or action; they affect only that user's own operation and do not threaten the stability of the system as a whole
  • Unrecoverable errors: the opposite; these are global or systemic errors, such as an out-of-bounds array access or an error during startup that breaks the boot flow, and they are usually fatal to the system

Rust provides the panic! macro: when it is invoked, the program prints an error message, unwinds the call stack leading up to the panic site, and then exits.

When a panic! occurs, the program offers two ways to terminate: stack unwinding and aborting.

The default is stack unwinding: Rust walks back up the stack, cleaning up data and function frames, which means more cleanup work, but the benefit is full error and backtrace information for post-mortem analysis. Aborting, as the name suggests, exits immediately without cleaning anything up, leaving the cleanup to the operating system.

For the vast majority of users the default is the best choice, but if you care about the size of the final binary you can switch to aborting; for example, the following change to Cargo.toml makes release builds abort on panic:

[profile.release]
panic = 'abort'

After a panic, if it happened on the main thread the program terminates; if it happened on some other child thread, only that thread terminates and the main thread is unaffected.
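A small sketch of that behavior: the spawned thread panics, join reports the panic as an Err, and the main thread keeps running (the panic message is still printed to stderr).

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        panic!("child thread panics");
    });

    // join returns Err when the spawned thread panicked.
    let result = handle.join();
    assert!(result.is_err());

    // The main thread is still alive and continues normally.
    println!("main thread survived the child panic");
}
```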

Result<T, E> is an enum, defined as follows:

enum Result<T, E> {
    Ok(T),
    Err(E),
}

The generic parameter T is the type of the success value, stored as Ok(T); E is the type of the error value, stored as Err(E).

Sometimes you don't want to match on a Result<T, E> just to get the T out, because match is exhaustive and always forces you to handle the Err arm. unwrap and expect simplify this: on success they extract the value from Ok(T); on failure they panic immediately. expect is just like unwrap in that it panics on error, but it panics with a custom error message, effectively overriding how the error is printed.

let f = File::open("hello.txt").unwrap();
let f = File::open("hello.txt").expect("Failed to open hello.txt");

Error propagation:

fn read_username_from_file() -> Result<String, io::Error>
  • The function returns a Result<String, io::Error>: reading the username successfully returns Ok(String); failure returns Err(io::Error)
  • The E in the Result<T, E> returned by File::open and f.read_to_string is io::Error
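A minimal sketch of a body for this signature, using the ? operator: each ? either unwraps the Ok value or returns early from the function with the Err(io::Error).

```rust
use std::fs::File;
use std::io::{self, Read};

// `?` propagates any io::Error from open or read_to_string to the caller.
fn read_username_from_file() -> Result<String, io::Error> {
    let mut s = String::new();
    File::open("hello.txt")?.read_to_string(&mut s)?;
    Ok(s)
}

fn main() {
    match read_username_from_file() {
        Ok(name) => println!("username: {name}"),
        Err(e) => println!("could not read username: {e}"),
    }
}
```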

3.18

  • Package: used to build, test, and share crates
  • Workspace: for large projects, multiple packages can be grouped together into a workspace
  • Crate: a tree of modules; it can be distributed as a third-party library or compiled into an executable
  • Module: one file may hold several modules, or one module per file; a module can be thought of as the unit of code organization in a real project

Crates

In Rust a crate is an independent compilation unit: compiling it produces either an executable or a library.

A crate packages related functionality together so it can be shared conveniently across projects. For example, the rand crate, not provided by the standard library but available as a third-party library, offers random-number generation; bringing it into the current project's scope with use rand; lets you use its features as rand::XXX.

Two types with the same name cannot exist in one crate, but they can across crates. For example, although the rand crate has an Rng trait, you can still define your own Rng in your project: the former is reached as rand::Rng, the latter simply as Rng, with no ambiguity.

Packages

Since a Package is a project, it has its own Cargo.toml and one or more crates grouped together by functionality. A Package can contain at most one library crate, but any number of binary crates.

Library Packages

$ cargo new my-lib --lib
     Created library `my-lib` package
$ ls my-lib
Cargo.toml
src
$ ls my-lib/src
lib.rs

If you try to run my-lib, you get an error:

$ cargo run
error: a bin target must be available for `cargo run`

The reason is that a library Package can only be referenced as a third-party library by other projects; it cannot run on its own. Only a binary Package, like the earlier one, can be run.

Just as with src/main.rs, Cargo knows that if a Package contains src/lib.rs, it contains a library crate named after the package, my-lib, whose crate root is src/lib.rs.

Modules

  • Use the mod keyword followed by the module name to create a new module
  • Modules can be nested; here the nesting mirrors the real scene, since both hosting guests and serving happen in the front of house
  • Modules can define all kinds of Rust items: functions, structs, enums, traits, and so on
  • All the modules are defined in the same file

The module tree is the nesting structure of modules, rooted at the crate module. If module A contains module B, then A is B's parent module and B is A's child module.

To call a function you need to know its path, and in Rust a path takes one of two forms:

  • An absolute path, starting from the crate root, beginning with the crate name or the keyword crate
  • A relative path, starting from the current module, beginning with self, super, or an identifier in the current module
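The two path forms can be sketched as follows; the front_of_house/hosting names follow the restaurant scene the notes allude to and are illustrative:

```rust
mod front_of_house {
    pub mod hosting {
        pub fn add_to_waitlist() -> &'static str {
            "added"
        }
    }
}

fn main() {
    // Absolute path: starts at the crate root.
    let a = crate::front_of_house::hosting::add_to_waitlist();
    // Relative path: starts from the current module (also the crate root here).
    let b = front_of_house::hosting::add_to_waitlist();
    assert_eq!(a, b);
    println!("both paths reach the same function: {a}");
}
```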

For safety, Rust makes everything private by default: functions, methods, structs, enums, constants, and yes, even modules themselves. A parent module cannot access its children's private items at all, but a child module can access the private items of its parent, grandparent, and so on up the tree.

A module's visibility does not imply the visibility of the items inside it: making a module visible only allows other modules to refer to the module itself; to use an item inside it, that item must also be marked pub.

When an external item A is brought into the current module, its visibility becomes private. If you want other outside code to be able to reach A through our module, it can be re-exported, which is done with pub use.
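A short sketch of re-exporting with pub use, reusing the illustrative module names from above:

```rust
mod front_of_house {
    // Both the module and the item inside it must be pub
    // for outside code to reach the function at all.
    pub mod hosting {
        pub fn add_to_waitlist() -> &'static str {
            "added"
        }
    }
}

// Re-export: callers can now write hosting::add_to_waitlist()
// without knowing about the internal front_of_house path.
pub use crate::front_of_house::hosting;

fn main() {
    println!("{}", hosting::add_to_waitlist());
}
```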

Bringing in modules from third-party crates; how to add an external dependency:

Edit Cargo.toml and add a line in the [dependencies] section: rand = "0.8.3"

3.21-3.24 rustlings

About rustlings: after studying Rust systematically there were almost no hard exercises, so I finished 70-plus of them in a single day on the 6th; the two days before that were spent getting familiar with GitHub Classroom and rustlings' self-check workflow.