```rust
/// Physical address for pflash#1
const PFLASH_START: usize = 0x2200_0000;

#[cfg_attr(feature = "axstd", no_mangle)]
fn main() {
    // Make sure that we can access the pflash region.
    let va = phys_to_virt(PFLASH_START.into()).as_usize();
    let ptr = va as *const u32;
    unsafe {
        println!("Try to access dev region [{:#X}], got {:#X}", va, *ptr);
        let magic = mem::transmute::<u32, [u8; 4]>(*ptr);
        println!("Got pflash magic: {}", str::from_utf8(&magic).unwrap());
    }
}
```
PFlash is QEMU's simulation of flash memory. When QEMU boots, it automatically loads the backing file into a fixed MMIO region, which can then be accessed directly.
Paging: the `features = ["paging"]` option enables virtual memory management to support MMIO access. It is handled in axruntime.
The workflow would be:
QEMU FDT: from 0x0c00_0000 to 0x3000_0000, describing the device MMIO space.
SBI: from 0x8000_0000 to 0x8020_0000. The RISC-V Supervisor Binary Interface provides an interface for supervisor-level software to perform machine-level operations.
Kernel image: from 0x8020_0000. `_skernel` marks the start of the S-level content such as code and static data; `_ekernel` marks the end of the kernel image, after which memory is available for other use.
Each top-level page table entry maps 1 GiB (0x4000_0000) of memory. The range 0x8000_0000..0xc000_0000 is mapped twice: identity-mapped at pgd_idx = 2, and mapped again at pgd_idx = 0x102 to the high kernel range 0xffff_ffc0_8000_0000..0xffff_ffc0_c000_0000. Each entry therefore covers a range much bigger than the kernel image itself.
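A minimal sketch of how such a boot page table could be built (the real code lives in axhal's platform boot module; the constants and function name here are illustrative):

```rust
// Sv39: the root table has 512 entries; each top-level entry covers 1 GiB.
const PHYS_BASE: usize = 0x8000_0000;
const KERNEL_VA_BASE: usize = 0xffff_ffc0_8000_0000;
const GIB: usize = 0x4000_0000;

fn make_boot_page_table() -> [u64; 512] {
    let mut pt = [0u64; 512];
    // One 1 GiB huge-page PTE pointing at 0x8000_0000; flags V|R|W|X|G|A|D.
    let pte = ((PHYS_BASE as u64 >> 12) << 10) | 0xef;
    pt[PHYS_BASE / GIB % 512] = pte;      // pgd_idx = 2: identity map 0x8000_0000..0xc000_0000
    pt[KERNEL_VA_BASE / GIB % 512] = pte; // pgd_idx = 0x102: high map 0xffff_ffc0_8000_0000..+1 GiB
    pt
}
```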
Task
Example
```rust
let worker = thread::spawn(move || {
    println!("Spawned-thread ...");

    // Make sure that we can access the pflash region.
    let va = phys_to_virt(PFLASH_START.into()).as_usize();
    let ptr = va as *const u32;
    let magic = unsafe { mem::transmute::<u32, [u8; 4]>(*ptr) };
    if let Ok(s) = str::from_utf8(&magic) {
        println!("Got pflash magic: {s}");
        0
    } else {
        -1
    }
});
```
Tasks run concurrently and are dispatched according to the scheduling strategy. A blocked task is moved to the wait_queue to wait; a ready task is moved to the run_queue, from which the scheduler picks the next task to dispatch.
```rust
let q1 = Arc::new(SpinNoIrq::new(VecDeque::new()));
let q2 = q1.clone();

let worker1 = thread::spawn(move || {
    println!("worker1 ...");
    for i in 0..=LOOP_NUM {
        println!("worker1 [{i}]");
        q1.lock().push_back(i);
        // NOTE: If worker1 doesn't yield, others have
        // no chance to run until it exits!
        thread::yield_now();
    }
    println!("worker1 ok!");
});

let worker2 = thread::spawn(move || {
    println!("worker2 ...");
    loop {
        if let Some(num) = q2.lock().pop_front() {
            println!("worker2 [{num}]");
            if num == LOOP_NUM {
                break;
            }
        } else {
            println!("worker2: nothing to do!");
            // TODO: it should sleep and wait for notify!
            thread::yield_now();
        }
    }
    println!("worker2 ok!");
});
```
Cooperative scheduling: each task must voluntarily yield or exit; otherwise it blocks everyone else, because the decision to give up the CPU belongs to the running task itself.
Preemptive scheduling: a task is suspended automatically when an external condition holds (it holds no lock and is not in the middle of device access, i.e. preemption is not disabled) and an internal condition holds (its current time slice is exhausted). A disable_count records the external condition; when several restrictions overlap, they simply sum up in the same counter.
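A minimal sketch of the disable-count idea (the names are illustrative, not the exact ArceOS API):

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

// Preemption is only allowed while this counter is zero.
static PREEMPT_DISABLE_COUNT: AtomicUsize = AtomicUsize::new(0);

fn disable_preempt() {
    // Taking a spinlock, entering device access, etc. each add one.
    PREEMPT_DISABLE_COUNT.fetch_add(1, Ordering::Relaxed);
}

fn enable_preempt() {
    // Only when every overlapping restriction has been released may the
    // scheduler preempt the current task again.
    if PREEMPT_DISABLE_COUNT.fetch_sub(1, Ordering::Relaxed) == 1 {
        // check_preempt_pending(); // hypothetical hook: reschedule if a tick arrived
    }
}
```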
```rust
// Enable IRQs before starting app
axhal::arch::enable_irqs();
```
on_timer_tick is triggered on every timer tick. When the timer ticks, the run_queue checks the current task's time slice and preempts the task if possible.
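A rough, self-contained sketch of that check, assuming a fixed-size time slice (field and function names are illustrative, not ArceOS's):

```rust
const MAX_TIME_SLICE: usize = 5;

struct TaskSlice {
    remaining_ticks: usize,
    preempt_disable_count: usize,
}

impl TaskSlice {
    /// Called from the timer interrupt: returns true when the current task
    /// should be preempted (slice exhausted and preemption not disabled).
    fn on_timer_tick(&mut self) -> bool {
        if self.remaining_ticks > 0 {
            self.remaining_ticks -= 1;
        }
        self.remaining_ticks == 0 && self.preempt_disable_count == 0
    }

    /// Refill the slice when the task is scheduled in again.
    fn reset(&mut self) {
        self.remaining_ticks = MAX_TIME_SLICE;
    }
}
```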
We can make scheduling more dynamic: give each task a priority and, while it runs on the CPU, maintain a vruntime that is adjusted as init_vruntime + delta / weight(nice), where delta grows with the timer ticks the task consumes and weight(nice) reflects the task's priority. The task with the lowest vruntime is kept at the top and is the next to run.
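A toy illustration of that rule (not the real scheduler's weight table or data structure): a higher weight makes vruntime grow more slowly, so that task gets picked more often.

```rust
struct TaskVruntime {
    init_vruntime: u64,
    delta: u64,  // timer ticks consumed while running
    weight: u64, // derived from the task's nice/priority value
}

impl TaskVruntime {
    fn vruntime(&self) -> u64 {
        self.init_vruntime + self.delta / self.weight
    }
}

/// Pick the task with the lowest vruntime (assumes `tasks` is non-empty).
fn pick_next(tasks: &[TaskVruntime]) -> usize {
    (0..tasks.len())
        .min_by_key(|&i| tasks[i].vruntime())
        .unwrap()
}
```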
Based on these experiments, we construct the kernel incrementally, adding components on demand.
UniKernel: a single S-level image; the app is linked into the kernel.
Each of the other kernel forms can be seen as a construction built on top of the unikernel.
MacroKernel: adds U-level management, with support for multiple apps, process management, etc.
Hypervisor: adds virtualization state, with restricted communication between U-level and S-level.
ArceOS Design
```mermaid
graph TD
    App <--> Runtime
    Runtime <--> HAL
```
The design of ArceOS is simple. First, the HAL (axhal) abstracts the hardware: it initializes traps, the boot stack, the MMU, and registers for each supported architecture. Then the runtime layer (the ax* crates) is split into many components supporting different environments, such as net, task, fs, etc.
Each arrow works in both directions: during boot, control flows from bottom to top to initialize everything and start the App; when the App calls something, the request flows from top to bottom to invoke the underlying functionality.
In practice, which components are included is chosen through features.
Commonly, devices are separated into FS (block), Net, and Display categories.
```rust
/// A structure that contains all device drivers, organized by their category.
#[derive(Default)]
pub struct AllDevices {
    /// All network device drivers.
    #[cfg(feature = "net")]
    pub net: AxDeviceContainer<AxNetDevice>,
    /// All block device drivers.
    #[cfg(feature = "block")]
    pub block: AxDeviceContainer<AxBlockDevice>,
    /// All graphics device drivers.
    #[cfg(feature = "display")]
    pub display: AxDeviceContainer<AxDisplayDevice>,
}
```
Devices are initialized in axruntime, where the axdriver module is loaded to probe each device and attach its driver.
In QEMU, each virtio-mmio slot is probed by reading its registers: a present device responds with its magic value and device ID, while an empty slot reads back 0 and is treated as having no device.
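A hedged sketch of that probe: the register offsets and the magic value come from the virtio-mmio spec, but the helper itself is illustrative, not the axdriver code.

```rust
const VIRTIO_MMIO_MAGIC: u32 = 0x7472_6976; // "virt" in little-endian

/// Returns the device ID if a virtio device answers at `base`, else None.
fn probe_virtio_mmio(base: *const u32) -> Option<u32> {
    unsafe {
        let magic = base.read_volatile();            // offset 0x00: MagicValue
        let device_id = base.add(2).read_volatile(); // offset 0x08: DeviceID
        // An empty slot reads back 0: treat it as "no device".
        if magic == VIRTIO_MMIO_MAGIC && device_id != 0 {
            Some(device_id)
        } else {
            None
        }
    }
}
```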
Block Driver
The block driver provides an interface to read and write blocks, giving the system I/O operations and persistent storage.
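A simplified sketch of what such an interface looks like; the trait and method names are illustrative, not the exact axdriver definitions.

```rust
pub trait BlockDriver {
    /// Size of one block in bytes (commonly 512).
    fn block_size(&self) -> usize;
    /// Read the block at `block_id` into `buf`.
    fn read_block(&mut self, block_id: u64, buf: &mut [u8]) -> Result<(), &'static str>;
    /// Write `buf` to the block at `block_id`.
    fn write_block(&mut self, block_id: u64, buf: &[u8]) -> Result<(), &'static str>;
    /// Flush any cached writes to the underlying storage.
    fn flush(&mut self) -> Result<(), &'static str>;
}
```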
ArceOS uses the axfs module, which defines the VFS interface and provides concrete implementations such as ramfs and devfs.
Monolith
At U-level we separate kernel memory from user memory, giving each process its own user context.
The basic flow is: construct a new user address space, load the application file into it, initialize the user stack, and then spawn a user task that starts at app_entry.
The upper part of the root page table is shared as kernel space, while the lower part is the independent, per-process user space.
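A toy, self-contained illustration of that layout (real page-table handling in axmm is different): the user root table reuses the kernel's upper-half entries and keeps the lower half private.

```rust
const ENTRIES: usize = 512;

/// Build a user root table that shares the kernel's upper-half entries
/// (indices 256..512) and leaves the lower half empty for the user app.
fn new_user_root(kernel_root: &[u64; ENTRIES]) -> [u64; ENTRIES] {
    let mut user_root = [0u64; ENTRIES];
    // Upper half: shared kernel space; the copied entries point to the same
    // lower-level kernel page tables.
    user_root[ENTRIES / 2..].copy_from_slice(&kernel_root[ENTRIES / 2..]);
    // Lower half stays empty: the app image and user stack are mapped here
    // later, when the user space is populated.
    user_root
}
```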
With user spaces separated, many resources can no longer be shared globally; instead, each task holds a TaskExt that references the independent resources owned by its user app.
TaskInner stores a pointer to the TaskExt, whose concrete type is declared via a macro.
```rust
/// Task extended data for the monolithic kernel.
pub struct TaskExt {
    /// The process ID.
    pub proc_id: usize,
    /// The user space context.
    pub uctx: UspaceContext,
    /// The virtual memory address space.
    pub aspace: Arc<Mutex<AddrSpace>>,
}

// Expands to a trait implementation that interprets the stored pointer
// as the `TaskExt` type.
def_task_ext!(TaskExt);
```
A physical computer system can host multiple virtual computer systems, each with its own virtual resources. Just like apps at U-level, each virtual system believes it exclusively owns those resources.
An emulator acts like an interpreter, simulating a virtual system when efficiency demands are loose.
A hypervisor executes most instructions directly, relying on the virtual system having the same architecture as the host, and thus gains much higher efficiency.
Type I: each virtual OS sits directly on the hardware, on equal footing.
Type II: the virtual OS runs on top of a host OS.
Each guest instance (an OS image) is loaded and run on our host kernel.
Design
Here we focus only on the Type I hypervisor.
Privilege levels are extended. Since host and guest must be separated, U-level splits into U and VU. The same happens at the kernel level, separating the host (the hypervisor) from the guest (the virtual OS), so S-level splits into HS and VS.
Communication
Privileged operations are implemented through communication between HS and VS: when the guest makes an SBI call, VS traps into HS, which performs the operation on its behalf.
The RISC-V hstatus register carries the virtualization mode:
SPV: records whether the trap came from a virtualized (VS/VU) context, which determines whether sret returns to VU or to U. SPVP: the privilege with which HS-level code may access and modify guest (V) memory.
We need to store both the guest context and the host context, then switch between them on VM-Exit and sret. This is implemented by run_guest and guest_exit, each of which is the other's reverse.
The timer is virtualized behind the SBI call by keeping a virtual clock for VS: when the guest sets a timer, we clear the guest's pending timer and arm the host timer; when the host timer interrupt arrives, we clear the host timer and inject the timer interrupt into the guest, waiting for its next set_timer request.
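A toy model of that back-and-forth (in the real hypervisor the pending flag lives in hvip and the host timer is armed through the SBI set_timer call):

```rust
struct VTimer {
    guest_timer_pending: bool,  // virtual timer interrupt injected into VS
    host_deadline: Option<u64>, // real timer armed on the guest's behalf
}

impl VTimer {
    /// The guest's SBI set_timer(deadline) call, trapped into HS.
    fn on_guest_set_timer(&mut self, deadline: u64) {
        self.guest_timer_pending = false;    // clear the guest timer
        self.host_deadline = Some(deadline); // arm the host timer
    }

    /// The host timer interrupt fires at the requested deadline.
    fn on_host_timer_irq(&mut self) {
        self.host_deadline = None;       // clear the host timer
        self.guest_timer_pending = true; // inject the timer into the guest,
                                         // until the next set_timer request
    }
}
```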
Memory is also separated between guest and host. The guest maps GVA to GPA as its own memory management, while the hypervisor is responsible for translating GPA to HPA as the second stage of the virtualization.
Each virtual device records its MMIO start address. On a page-fault VM-Exit, the hypervisor looks up the faulting address with vmdevs.find(addr) and calls that device's handle_mmio to serve the request.
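A hedged sketch of that dispatch; the trait and the find/handle_mmio signatures are assumptions for illustration, not the real device-model API.

```rust
trait VmDev {
    fn mmio_range(&self) -> core::ops::Range<usize>;
    fn handle_mmio(&mut self, offset: usize, is_write: bool, value: &mut u64);
}

/// On a guest page-fault VM-Exit, find the device covering the faulting GPA
/// and let it emulate the access; returns false if no device matches.
fn handle_guest_page_fault(
    devs: &mut [&mut dyn VmDev],
    fault_gpa: usize,
    is_write: bool,
    value: &mut u64,
) -> bool {
    if let Some(dev) = devs.iter_mut().find(|d| d.mmio_range().contains(&fault_gpa)) {
        let offset = fault_gpa - dev.mmio_range().start;
        dev.handle_mmio(offset, is_write, value);
        true // emulated: the hypervisor advances the guest PC and resumes
    } else {
        false
    }
}
```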
I noticed that ranges like `for i in 1..=5` are interesting and convenient, and they also work with characters. Rust has quite a few numeric types; it is worth knowing how many bytes each occupies, so you know the value range a type allows and whether it can represent negative numbers. Type conversions must be explicit: Rust will never silently turn your 16-bit integer into a 32-bit one.
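A small illustration of both points, ranges over characters and explicit integer widening:

```rust
fn main() {
    for c in 'a'..='e' {
        print!("{c} "); // a b c d e
    }
    println!();

    let small: u16 = 1000;
    // let big: u32 = small;        // error: mismatched types, no implicit widening
    let big: u32 = u32::from(small); // the conversion has to be written out
    println!("{big}");
}
```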
A lifetime annotation by itself does not mean much; its purpose is to tell the compiler how multiple references relate to each other. For example, a function's first parameter `first` is a reference to an `i32` with lifetime `'a`, and its second parameter `second` is also a reference to an `i32` with the same lifetime `'a`. The annotation only says that `first` and `second` both live at least as long as `'a`; how long exactly, or which one lives longer, we cannot tell:
```rust
fn useless<'a>(first: &'a i32, second: &'a i32) {}
```
Lifetime annotations in function signatures
```rust
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}
```
As with generics, a lifetime parameter must be declared first, inside `<'a>`.
`x`, `y`, and the return value all live at least as long as `'a` (since the return value is either `x` or `y`).
This signature says that, for some lifetime `'a`, both parameters live at least as long as `'a`, and so does the returned reference. In practice, this means the returned reference's lifetime equals the smaller of the two parameters' lifetimes. Although both parameters are annotated with `'a`, their actual lifetimes may differ: `'a` does not mean the lifetime is exactly `'a`, only that it is at least `'a`. Specifying lifetime parameters in a function signature does not change the real lifetimes of the references passed in or returned; it tells the compiler to reject the program when the constraint is not satisfied.
So the `longest` function does not know exactly how long `x` and `y` will live; it only needs their scopes to last at least as long as `'a`.
The following example shows that `result`'s lifetime must equal the smaller of the two parameters' lifetimes:
```rust
fn main() {
    let string1 = String::from("long string is long");
    let result;
    {
        let string2 = String::from("xyz");
        result = longest(string1.as_str(), string2.as_str());
    }
    println!("The longest string is {}", result);
}
```

```text
error[E0597]: `string2` does not live long enough
 --> src/main.rs:6:44
  |
6 |         result = longest(string1.as_str(), string2.as_str());
  |                                            ^^^^^^^ borrowed value does not live long enough
7 |     }
```
In the code above, `result` must stay alive until the `println!`; since `result`'s lifetime is `'a`, `'a` must extend to the `println!`, but `string2` is dropped at the end of the inner block, so the compiler rejects the program.