Q: I can run POSIX based applications directly over a PREEMPT_RT kernel on my target system, so what is the point of running Xenomai 3 there?
A: If your application is already fully POSIXish, and the performance requirements are met, then there is likely no point. However, you may want to consider Xenomai 3 in two other situations:
- you want to port a legacy embedded application to Linux without having to switch APIs, i.e. you don't want to rewrite it on top of the POSIX interface. Xenomai may help in this case, since it supports multiple programming interfaces over a common real-time layer, including emulators of traditional RTOS APIs. Xenomai 3 will make those APIs available to a PREEMPT_RT based system as well.
- the target hardware platform has limited horsepower, and/or you want the real-time job to put the smallest possible overhead on your system. This is where dual kernels are usually better than a native preemption system. With the latter, all parts of the Linux system have to run internal code that prevents real-time activities from being delayed in an unacceptable manner (e.g. the priority inheritance mechanism, threaded IRQ handlers). In a dual kernel system, there is no need for this, since the real-time co-kernel runs separately from the normal Linux kernel. Therefore, regular Linux activity is not charged for real-time activity; it does not even have to know about it.
In short, there cannot be any pre-canned answer to such a question: it really depends on your performance requirements and your target hardware capabilities. This has to be evaluated on a case-by-case basis. Claiming that "we can achieve X microseconds worst-case latency" without specifying the characteristics of the target platform would make no sense.