Computers require an increasing amount of processing power to ensure that demanding programs run smoothly. Current technology will not be able to keep up for long, and a new concept is needed in the long term. Together with their partners in the SFB/Transregio 89 collaborative project “Invasive Computing”, computer scientists at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) are currently developing a method to distribute processing power to programs based on their needs, which will enable computers to cope with future processing requirements.
This is how processing power is allocated to individual programs: the invasive network-on-chip (iNoC) connects tiles on a heterogeneous tile-based multiprocessor system-on-chip (MPSoC) architecture. (Picture: SFB/Transregio 89, Dr.-Ing. Jan Heißwolf)
Everyone will be familiar with playing a video on their computer that keeps pausing every few seconds and just won’t buffer properly. These stops and starts are caused by the operating system architecture and by other applications running in the background. In today’s multi-core processors, operating systems distribute processing time and resources (e.g. memory) to applications without accurate information about their actual requirements. In other words, processors run multiple tasks at once, which means there is competition for shared resources. This can cause unpredictable delays and frequent short interruptions, as with jerky videos. As processing power requirements increase, multi-core processor technology is reaching its limits. While it may be feasible to keep integrating more and more cores, even up to several hundred, this is inefficient: it increases competition while slowing down overall processing speed.
Knowing what the application needs would enable a better distribution of resources.
Transregio 89 is a collaborative research project in which FAU researchers under the leadership of Prof. Dr. Jürgen Teich are searching for solutions to this problem. The approach: operating systems should not distribute resources such as processing power to programs solely according to their own strategies. Instead, programs should be able to specify a framework for their resource usage. The programs are analysed in advance, and the performance requirements determined in this way are shared with the operating system, which in turn ensures that resources are allocated accordingly. A video player could, for instance, request four cores, which would then be reserved for playback during its run time. “This new system architecture helps prevent the operating system from making wrong decisions and guarantees the necessary processing power,” says Prof. Dr. Wolfgang Schröder-Preikschat from the Chair of Distributed Systems and Operating Systems at FAU.
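The idea of a program claiming cores for its run time has a rough present-day analogy in CPU pinning on Linux. The sketch below is not the project's actual programming interface (which defines its own primitives); the function name `reserve_cores` is purely illustrative, and it uses the standard `os.sched_setaffinity` call available on Linux.

```python
import os

def reserve_cores(n):
    """Pin the current process to up to n cores, loosely mimicking an
    application declaring its processing needs to the operating system.
    (Illustrative only; not the invasive-computing API.)"""
    available = sorted(os.sched_getaffinity(0))   # cores we may run on
    claimed = set(available[:min(n, len(available))])
    os.sched_setaffinity(0, claimed)              # restrict ourselves to them
    return claimed

# A "video player" asking for four cores for the duration of playback:
claimed = reserve_cores(4)
```

Unlike a mere affinity hint, the invasive approach would have the operating system actually reserve the granted cores exclusively for the requesting program.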
IT security: old risks in new guise
The new approach also raises new challenges with regard to IT security. When programs can claim resources unhindered, it becomes easy for malware to paralyse a system: it can monopolise all the resources for itself and delete or overwrite the memory of other programs, a scenario reminiscent of “Core Wars”, a computer game in which programs compete for the memory of a simple computer. The program that succeeds in wiping out the other through excessive resource usage wins. To prevent this, IT security experts from FAU and KU Leuven are working on an SFB/Transregio 89 sub-project to develop appropriate countermeasures: enhanced security mechanisms built into the processor hardware. “We ensure the confidentiality of code and data across all storage levels, even if a program uses more resources than required or tries to read the memory of other programs,” adds Prof. Dr. Felix Freiling from the IT Security group at FAU.
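One part of such a countermeasure, bounding what any single program may claim so that none can monopolise the machine, can be sketched as a toy resource broker. All names (`ResourceBroker`, `invade`, `retreat`) and the half-of-the-machine cap are hypothetical illustrations, not the project's actual mechanisms, which are built into hardware rather than software.

```python
class ResourceBroker:
    """Toy OS-side broker that grants core requests under a per-program cap,
    so a greedy or malicious program cannot claim the whole machine."""

    def __init__(self, total_cores):
        self.total = total_cores
        self.free = total_cores
        self.cap = max(1, total_cores // 2)  # illustrative policy: at most half

    def invade(self, requested):
        """Grant as many cores as the request, the cap, and availability allow."""
        granted = min(requested, self.cap, self.free)
        self.free -= granted
        return granted

    def retreat(self, cores):
        """Return previously granted cores to the free pool."""
        self.free = min(self.total, self.free + cores)

broker = ResourceBroker(8)
first = broker.invade(100)   # greedy request, capped at half the machine
second = broker.invade(100)  # another greedy request gets the remainder
third = broker.invade(100)   # nothing left to grant
```

A real system would additionally enforce memory isolation in hardware, as the quoted sub-project does, so that even a program that over-claims resources cannot read other programs' data.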
The computer scientists are confident that their approach has great potential and will give computers the ability to safely provide the necessary processing speed in the future.