Having executed code one task after another, we now introduce the Process: a full space-time description of how a binary file executes in the OS. In other words, each process holds the independent resources it needs to run.
After the Process, the Thread and the Coroutine were also developed as operating systems evolved. They differ in the resources they occupy: a thread usually lives inside a process and holds its own independent stack and flow of control, while a coroutine lives inside a thread and holds only its own flow of control.
Design
Every process needs an independent memory layout and must be dispatchable by the CPU; this functionality builds on Task. Beyond that, each process can fork its own child processes, so parent and child diverge over time. When a process exits, its resources cannot all be recycled immediately, because its parent may still need its exit code; we mark such a process as a Zombie Process until the parent collects it.
To tell parent from child and to distinguish each isolated process, we label every process with a PID (Process Identifier). Note that when we fork a process, the child is identical to the parent except for a0, the register used for return values: in the parent, fork returns the new child's PID; in the child, fork returns 0.
fork: copy the current process state (sp, registers, address space, etc.) into a new child process.
waitpid: wait for a child to become a zombie, then recycle all of its resources and collect its exit code.
exec: clear the current process state and load a new executable file into it.
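The lifecycle above (run, exit into zombie, reap via waitpid) can be modeled at user level. The sketch below is a simulation, not kernel code: all names (`ProcTable`, `spawn`, `exit_proc`, `waitpid_sim`) are invented for illustration, and the return conventions (-2 for still running, -1 for no such child) mirror the ones used later in this chapter.

```rust
// A user-level simulation of the zombie/reap lifecycle.
// Real kernel code manipulates TaskControlBlocks instead.

#[derive(Clone, Copy, PartialEq, Debug)]
enum Status {
    Running,
    Zombie(i32), // holds the exit code until the parent reaps it
}

struct ProcTable {
    procs: Vec<(usize, Status)>, // (pid, status)
    next_pid: usize,
}

impl ProcTable {
    fn new() -> Self {
        ProcTable { procs: Vec::new(), next_pid: 1 }
    }
    // "fork": create a child; the parent sees the new pid, the child would see 0
    fn spawn(&mut self) -> usize {
        let pid = self.next_pid;
        self.next_pid += 1;
        self.procs.push((pid, Status::Running));
        pid
    }
    // "exit": the process becomes a zombie; resources are not yet reclaimed
    fn exit_proc(&mut self, pid: usize, code: i32) {
        if let Some(p) = self.procs.iter_mut().find(|p| p.0 == pid) {
            p.1 = Status::Zombie(code);
        }
    }
    // "waitpid": reap a zombie child and return its exit code;
    // -2 means still running, -1 means no such child
    fn waitpid_sim(&mut self, pid: usize) -> i32 {
        match self.procs.iter().position(|p| p.0 == pid) {
            Some(idx) => match self.procs[idx].1 {
                Status::Zombie(code) => {
                    self.procs.remove(idx); // resources finally recycled
                    code
                }
                Status::Running => -2,
            },
            None => -1,
        }
    }
}
```

Note how the entry stays in the table after `exit_proc`: that is exactly the zombie state, and only the parent's wait removes it.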
Data Construction
All released PIDs are recycled through a PidAllocator, so we never need to worry about colliding with a previously used pid.
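The allocator can be sketched as a bump counter plus a pool of recycled ids, so freed pids are reused before new ones are minted. This is a minimal sketch of the idea; field and method names follow common rCore-style conventions but are assumptions here.

```rust
// A minimal PID allocator: freed pids go into `recycled` and are
// handed out again before `current` is bumped.
pub struct PidAllocator {
    current: usize,
    recycled: Vec<usize>,
}

impl PidAllocator {
    pub fn new() -> Self {
        PidAllocator { current: 0, recycled: Vec::new() }
    }
    pub fn alloc(&mut self) -> usize {
        if let Some(pid) = self.recycled.pop() {
            pid
        } else {
            self.current += 1;
            self.current - 1
        }
    }
    pub fn dealloc(&mut self, pid: usize) {
        // a pid must have been allocated, and must not be freed twice
        assert!(pid < self.current);
        assert!(!self.recycled.contains(&pid), "pid {} freed twice", pid);
        self.recycled.push(pid);
    }
}
```

In the kernel this sits behind a `PidHandle` whose `Drop` impl calls `dealloc`, so recycling happens automatically when a process's resources are released.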
CPU dispatch is handled by the newly introduced Processor. We also introduce an idle control flow (idle process) that is used to call into the other processes.
Why not have the previous task switch to the next one directly, instead of going through the idle control flow?
By separating the idle control flow (used for startup and dispatch) from each task's own flow, the dispatch bookkeeping never runs inside another task's context, and the dispatch procedure stays invisible to each process's Trap handling.
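A coarse user-level model of this design: instead of `__switch`, the "idle loop" below just pulls the next ready task from a queue, runs one time slice, and requeues it if unfinished. Tasks never transfer control to each other directly; they always return to the idle flow first, just as `schedule()` does. `Task`, `steps_left`, and `idle_loop` are invented names for this sketch.

```rust
use std::collections::VecDeque;

// One simulated task: an id plus how many time slices it still needs.
struct Task {
    id: usize,
    steps_left: u32,
}

// The idle control flow owns dispatch: it picks the next ready task,
// runs one slice, and either retires or requeues it.
fn idle_loop(mut ready: VecDeque<Task>) -> Vec<usize> {
    let mut finish_order = Vec::new();
    while let Some(mut task) = ready.pop_front() {
        task.steps_left -= 1; // run one time slice
        if task.steps_left == 0 {
            finish_order.push(task.id); // task exits
        } else {
            ready.push_back(task); // cf. suspend_current_and_run_next
        }
    }
    finish_order
}
```

In the real kernel the "return to the idle flow" step is the pair of `__switch` calls shown next: one out of the task (`schedule`) and one into the next task (`run_tasks`).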
```rust
// run_tasks(): loop, fetch a ready task and switch to it
if let Some(task) = fetch_task() {
    let idle_task_cx_ptr = processor.get_idle_task_cx_ptr();
    let mut task_inner = task.inner_exclusive_access();
    let next_task_cx_ptr = &task_inner.task_cx as *const TaskContext;
    task_inner.task_status = TaskStatus::Running;
    drop(task_inner);
    processor.current = Some(task);
    drop(processor);
    unsafe {
        __switch(idle_task_cx_ptr, next_task_cx_ptr);
    }
}
```
```rust
// switch back to the idle control flow when a task runs out of its time slice
pub fn schedule(switched_task_cx_ptr: *mut TaskContext) {
    let mut processor = PROCESSOR.exclusive_access();
    let idle_task_cx_ptr = processor.get_idle_task_cx_ptr();
    drop(processor);
    unsafe {
        __switch(switched_task_cx_ptr, idle_task_cx_ptr);
    }
}
```
Dispatch Construction
Previously we used suspend_current_and_run_next to pause the current task and switch to the next one; now we adapt it to the process design.
```rust
// os/src/task/mod.rs
pub fn suspend_current_and_run_next() {
    let task = take_current_task().unwrap();

    // ---- access current TCB exclusively
    let mut task_inner = task.inner_exclusive_access();
    let task_cx_ptr = &mut task_inner.task_cx as *mut TaskContext;
    task_inner.task_status = TaskStatus::Ready;
    drop(task_inner);
    // ---- stop exclusively accessing current PCB

    // push back to the ready queue
    add_task(task);
    // switch to the idle control flow
    schedule(task_cx_ptr);
}
```
Previously, a task was never created by another task, but a process can be created by its parent. A syscall may now replace the current address space (and with it the TrapContext), leaving the old pointer stale, so we refactor trap_handler to refetch the trap context after the syscall returns.
```rust
// fn trap_handler() -> !
Trap::Exception(Exception::UserEnvCall) => {
    // advance sepc so we return to the instruction after ecall
    let mut cx = current_trap_cx();
    cx.sepc += 4;
    // the syscall may create a new process and change the trap context,
    let result = syscall(cx.x[17], [cx.x[10], cx.x[11], cx.x[12]]);
    // so whether cx changed or not, refetch it before writing the return value
    cx = current_trap_cx();
    cx.x[10] = result as usize;
}
```
Now we will construct fork, exec, waitpid and exit.
Fork
We need to copy the parent's entire memory layout and task state, then allocate a new kernel stack for the child.
```rust
// impl MemorySet
pub fn from_existed_user(user_space: &MemorySet) -> MemorySet {
    let mut memory_set = Self::new_bare();
    // map trampoline
    memory_set.map_trampoline();
    // copy data sections / trap context / user stack
    for area in user_space.areas.iter() {
        let new_area = MapArea::from_another(area);
        memory_set.push(new_area, None);
        // copy data from the parent's space, frame by frame
        for vpn in area.vpn_range {
            let src_ppn = user_space.translate(vpn).unwrap().ppn();
            let dst_ppn = memory_set.translate(vpn).unwrap().ppn();
            dst_ppn
                .get_bytes_array()
                .copy_from_slice(src_ppn.get_bytes_array());
        }
    }
    memory_set
}
```
```rust
// impl TaskControlBlock
// fn fork
let trap_cx_ppn = memory_set
    .translate(VirtAddr::from(TRAP_CONTEXT).into())
    .unwrap()
    .ppn();
// alloc a pid and a kernel stack in kernel space
let pid_handle = pid_alloc();
let kernel_stack = KernelStack::new(&pid_handle);
let kernel_stack_top = kernel_stack.get_top();
let task_control_block = Arc::new(TaskControlBlock {
    pid: pid_handle,
    kernel_stack,
    inner: unsafe {
        UPSafeCell::new(TaskControlBlockInner {
            trap_cx_ppn,
            base_size: parent_inner.base_size,
            task_cx: TaskContext::goto_trap_return(kernel_stack_top),
            task_status: TaskStatus::Ready,
            memory_set,
            parent: Some(Arc::downgrade(self)),
            children: Vec::new(),
            exit_code: 0,
        })
    },
});
// add the child to the parent's children list
parent_inner.children.push(task_control_block.clone());
// modify kernel_sp in trap_cx
// **** access child PCB exclusively
let trap_cx = task_control_block.inner_exclusive_access().get_trap_cx();
trap_cx.kernel_sp = kernel_stack_top;
```
Finally, implement sys_fork:
```rust
pub fn sys_fork() -> isize {
    let current_task = current_task().unwrap();
    let new_task = current_task.fork();
    let new_pid = new_task.pid.0;
    let trap_cx = new_task.inner_exclusive_access().get_trap_cx();

    // for the child process, fork returns 0
    trap_cx.x[10] = 0; // x[10] is the a0 register

    add_task(new_task);
    new_pid as isize
}
```
We can see that when trap_handler calls sys_fork, the parent process's x[10] (a0) receives new_pid as the return value, while the child starts with a0 = 0.
Exec
If we want to execute a task by its name, we first need the app names to be stored as strings alongside the loaded apps.
```rust
// os/build.rs
writeln!(
    f,
    r#"
    .global _app_names
_app_names:"#
)?;
for app in apps.iter() {
    writeln!(f, r#"    .string "{}""#, app)?;
}
```
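The `_app_names` symbol then points at a run of NUL-terminated strings, one per app. How the kernel recovers names from that blob can be sketched as below; here the blob is modeled as a plain byte slice, and `parse_app_names` / `find_app` are illustrative names (the real kernel reads the bytes through extern "C" symbols).

```rust
// Split a run of NUL-terminated strings into individual app names.
fn parse_app_names(bytes: &[u8]) -> Vec<&str> {
    bytes
        .split(|&b| b == 0)
        .filter(|s| !s.is_empty())
        .map(|s| core::str::from_utf8(s).unwrap())
        .collect()
}

// exec-by-name then reduces to a linear search for the app's index.
fn find_app(names: &[&str], name: &str) -> Option<usize> {
    names.iter().position(|&n| n == name)
}
```

With the index in hand, the loader can fetch the matching ELF data and hand it to exec.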
Exit
When a process exits, we hand all of its children over to the initial process, then release its user address space early; the rest of its resources wait for the parent's waitpid.

```rust
// move all its children to the initial process
// ++++++ access initproc TCB exclusively
{
    let mut initproc_inner = INITPROC.inner_exclusive_access();
    for child in inner.children.iter() {
        child.inner_exclusive_access().parent = Some(Arc::downgrade(&INITPROC));
        initproc_inner.children.push(child.clone());
    }
}
// ++++++ stop exclusively accessing initproc PCB

// clear children and release the user address space early
inner.children.clear();
inner.memory_set.recycle_data_pages();
drop(inner);
// **** stop exclusively accessing current PCB
// drop task manually to keep the reference count correct
drop(task);
// this context will never be resumed, so hand schedule() a throwaway
// _unused context, which Rust will recycle
let mut _unused = TaskContext::zero_init();
schedule(&mut _unused as *mut _);
```
```rust
// os/src/syscall/process.rs
pub fn sys_exit(exit_code: i32) -> ! {
    exit_current_and_run_next(exit_code);
    panic!("Unreachable in sys_exit!");
}
```
WaitPid
waitpid returns -1 if no child with the specified pid exists; if a matching child is still running, it returns -2; otherwise the child is a zombie, so we recycle it, write its exit code back to the parent, and return its pid.
```rust
// search the children list for a zombie child with a matching pid
let pair = inner.children.iter().enumerate().find(|(_, p)| {
    p.inner_exclusive_access().is_zombie() && (pid == -1 || pid as usize == p.getpid())
});
if let Some((idx, _)) = pair {
    let child = inner.children.remove(idx);
    // confirm that the child will be deallocated after removal from the children list
    assert_eq!(Arc::strong_count(&child), 1);
    let found_pid = child.getpid();
    // ++++ temporarily access child PCB exclusively
    let exit_code = child.inner_exclusive_access().exit_code;
    // ++++ stop exclusively accessing child PCB
    *translated_refmut(inner.memory_set.token(), exit_code_ptr) = exit_code;
    found_pid as isize
} else {
    // a matching child exists but is still running
    -2
}
```
```rust
// user/src/lib.rs
pub fn wait(exit_code: &mut i32) -> isize {
    loop {
        match sys_waitpid(-1, exit_code as *mut _) {
            -2 => {
                yield_();
            }
            // -1 or a real pid
            exit_pid => return exit_pid,
        }
    }
}
```