Hardware and software setup

Methods of implementing application software. Types of application programs

In this article, I would like to talk about what application programs are, what applied tasks can be solved with their help (for example, a simple database), and what role they play for the end user of a personal computer. First of all, I would like to note that a computer can process any data the user sends to it. But for this data to be recognized and understood by the machine correctly, a special program must be written in a language the machine understands: put simply, a series of sequential instructions for performing certain actions.

Kinds of application programs

Application programs are programs whose purpose is to solve specific tasks and that interact directly with the user. Computer programs are needed to automate processes, store and process data, and carry out modeling, design, and other complex computing work. Programs are usually divided into two classes: system programs and application programs. The former mainly process information coming from equipment such as a network card, video card, or other connected devices; that is, they are the programs that interact with hardware or external devices. We will talk about them in the following articles. The second class, application programs, deserves a more detailed discussion.

Application programs are designed to interact with the end user: the user enters some data as input and receives a certain result of the processed data as output. This amounts to solving an applied task, for example, scanning images and then processing them, or searching for the right files. Application programs can be found in almost all areas of human activity, whether it is accounting at an enterprise, the creation of graphic images, drawing, and so on. Application programs are also present in such very important systems as database management systems, which matter greatly in large enterprises with many users who need to store and use large amounts of information.

Types and examples of application programs

Application programs include:

  • Text editors. Designed for creating and editing text without formatting;
  • Word processors (MS Word). More advanced text editors that let you format text, change fonts and sizes, and insert graphic files, tables, and so on, for a more presentable document design;
  • Spreadsheets (MS Excel). They are mainly used to process data arranged in tables. The applied task most often performed is storing accounting records and analyzing them afterwards;
  • Raster and vector graphics editors (Photoshop, Corel) and image viewers. Application programs of this type let you create, edit, and view graphic images;
  • Audio and video players and editors (WinAmp). They let you watch videos, listen to music, and create musical compositions;
  • Database management systems (for example, MySQL). Such programs are used to work with databases. For example, a customer accounting program is a simple database that stores information about customers, their contact details, and so on, and supports searching for, deleting, and adding records (a sketch of this idea follows the list);
  • Translators and electronic dictionaries. Such application programs let you translate text between foreign languages without much effort and without studying those languages directly;
  • Computer games. Used for entertainment or for learning in a playful way.
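As an illustration of the customer accounting example above, here is a minimal sketch in C of an in-memory "database" with add and search operations. It is only a toy model under assumed record fields (name and phone); a real accounting program would keep its records in a proper DBMS.

```c
#include <stdio.h>
#include <string.h>

#define MAX_CUSTOMERS 100

/* One record of the toy customer database (fields are illustrative). */
struct customer {
    char name[64];
    char phone[32];
};

static struct customer db[MAX_CUSTOMERS];
static int count = 0;

/* Add a record; returns its index, or -1 if the table is full. */
static int add_customer(const char *name, const char *phone)
{
    if (count >= MAX_CUSTOMERS)
        return -1;
    snprintf(db[count].name, sizeof db[count].name, "%s", name);
    snprintf(db[count].phone, sizeof db[count].phone, "%s", phone);
    return count++;
}

/* Linear search by name; returns NULL when no record matches. */
static const struct customer *find_customer(const char *name)
{
    for (int i = 0; i < count; i++)
        if (strcmp(db[i].name, name) == 0)
            return &db[i];
    return NULL;
}

int main(void)
{
    add_customer("Ivanov", "+7-900-000-00-00");
    const struct customer *c = find_customer("Ivanov");
    if (c)
        printf("%s: %s\n", c->name, c->phone);
    return 0;
}
```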

Another example of an application program is a program for counting reposts. It is difficult to list all the types of application programs, but we have tried to highlight the main ones.

Creating a complete application environment that is fully compatible with the environment of another operating system is a fairly complex task, closely tied to the structure of the operating system. There are various options for building multiple application environments, which differ both in their architectural solutions and in the functionality that provides different degrees of application portability.

In many versions of the UNIX operating system, the application environment translator is implemented as a regular application. In operating systems built around the microkernel concept, such as Windows NT, application environments run as user-mode servers. And in OS/2, with its simpler architecture, application environments are built deep into the operating system.

One of the most obvious options for implementing multiple application environments is based on the standard layered structure of the OS. In Fig. 3.8, the operating system OS1 supports, in addition to its "native" applications, applications of the operating system OS2. To do this, it contains a special application, an application software environment, which translates the interface of the "foreign" operating system, API OS2, into the interface of its "native" operating system, API OS1.

Fig. 3.8. An application software environment that translates system calls
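The following sketch shows the idea of such a translation layer in C. The function names on both sides (os1_open, os2_DosOpen, and so on) are invented for illustration; the point is only that each "foreign" call is reimplemented in terms of the "native" API.

```c
#include <stddef.h>

/* Native (OS1) API, assumed to be provided by the host OS. */
extern int  os1_open(const char *path, int flags);
extern long os1_write(int fd, const void *buf, size_t len);

/* Foreign (OS2-style) handle type exposed to guest applications. */
typedef unsigned long OS2_HANDLE;   /* 0 means "invalid handle" */

/* The environment reimplements each foreign call on top of OS1. */
OS2_HANDLE os2_DosOpen(const char *path)
{
    /* Translate OS2 open semantics onto OS1 flags (simplified). */
    int fd = os1_open(path, 2 /* read-write, illustrative value */);
    return (fd < 0) ? 0 : (OS2_HANDLE)(fd + 1);
}

long os2_DosWrite(OS2_HANDLE h, const void *buf, size_t len)
{
    if (h == 0)
        return -1;
    /* In this sketch a handle is fd + 1; convert back for OS1. */
    return os1_write((int)h - 1, buf, len);
}
```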

In another implementation of multiple application environments, the operating system has several peer application programming interfaces. In the example in Fig. 3.9, the operating system supports applications written for OS1, OS2, and OS3. To do this, the application programming interfaces of all these operating systems are placed directly in the kernel space of the system: API OS1, API OS2, and API OS3.

Fig. 3.9. Implementation of compatibility based on several peer APIs

In this variant, the functions of each API level call the functions of the underlying OS level, which must support all three generally incompatible application environments. Different operating systems manage system time differently, use different time-of-day formats, share processor time according to their own algorithms, and so on. The functions of each API are implemented by the kernel with the specifics of the corresponding OS taken into account, even when the functions have a similar purpose.
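One way to picture this is a kernel-side dispatcher that selects the implementation of a call according to which OS the calling process targets. The sketch below is illustrative; the personality tag and function names are assumptions, not any particular kernel's API.

```c
/* Which foreign OS a process was built for (set at load time). */
enum personality { PERS_OS1, PERS_OS2, PERS_OS3 };

struct process {
    enum personality pers;
    /* ... other process state ... */
};

/* Each environment implements the same conceptual call with its
   own semantics, e.g. its own time-of-day format. */
extern long os1_get_time(void);
extern long os2_get_time(void);
extern long os3_get_time(void);

long sys_get_time(struct process *p)
{
    switch (p->pers) {
    case PERS_OS1: return os1_get_time();
    case PERS_OS2: return os2_get_time();
    case PERS_OS3: return os3_get_time();
    }
    return -1;   /* unknown personality */
}
```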

Another way to build multiple application environments is based on the microkernel approach. Here it is very important to separate the basic mechanisms of the operating system, common to all application environments, from the high-level functions, specific to each application environment, that solve strategic problems.

In accordance with the microkernel architecture, all OS functions are implemented by the microkernel and by user-mode servers. It is important that each application environment is designed as a separate user-mode server and does not include the basic mechanisms (Fig. 3.10). Applications use the API to make system calls to the corresponding application environment through the microkernel. The application environment processes the request, executes it (perhaps calling on the basic functions of the microkernel for help), and sends the result back to the application. While executing the request, the application environment in turn has to access the underlying OS mechanisms implemented by the microkernel and other OS servers.

Fig. 3.10. The microkernel approach to implementing multiple application environments
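A rough sketch of this request path, with invented IPC primitives standing in for whatever message-passing interface a real microkernel provides:

```c
/* A marshalled API request sent from an application to its
   application-environment server via the microkernel. */
struct message {
    int  call_id;     /* which API function is requested */
    long args[4];     /* marshalled arguments            */
    long result;      /* filled in by the server         */
};

/* Assumed microkernel IPC primitives (hypothetical signatures). */
extern int ipc_send(int server_port, struct message *m);
extern int ipc_receive(int server_port, struct message *m);

#define OS2_ENV_PORT   42   /* port of the OS2 environment server */
#define CALL_DOS_WRITE  7

/* Client-side stub linked into the application: it looks like an
   ordinary API call but is really a message exchange. */
long dos_write(int handle, const void *buf, long len)
{
    struct message m = { .call_id = CALL_DOS_WRITE,
                         .args = { handle, (long)buf, len, 0 } };
    if (ipc_send(OS2_ENV_PORT, &m) != 0)  /* microkernel routes the call */
        return -1;
    ipc_receive(OS2_ENV_PORT, &m);        /* block until the server replies */
    return m.result;
}
```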

This approach to designing multiple application environments has all the advantages and disadvantages of a microkernel architecture, in particular:

  • it is very easy to add and remove application environments, which is a consequence of the good extensibility of microkernel operating systems;
  • reliability and stability: if one of the application environments fails, all the others remain operational;
  • the low performance of microkernel operating systems affects the speed of the application environments, and hence the speed of application execution.

Creating several application environments within one operating system to run applications of different operating systems makes it possible to have a single version of a program and transfer it between operating systems. Multiple application environments ensure binary compatibility of a given OS with applications written for other operating systems. As a result, users get more freedom in choosing operating systems and easier access to quality software.

Questions for self-examination

  1. What is meant by OS architecture?
  2. What three main layers are usually distinguished in the structure of a computing system?
  3. What role does the OS assign to the system call interface?
  4. What conditions must be met when designing an OS in order for the OS to be easily portable?
  5. What is the difference between microkernel architecture and traditional OS architecture?
  6. Why is the microkernel well suited to support distributed computing?
  7. What is meant by the concept of multiple application environments?
  8. What is the essence of the library translation method?

While many architectural features of the OS directly concern only system programmers, the concept of multiple application (operating) environments directly concerns the needs of end users: it is the ability of the operating system to run applications written for other operating systems. This property of the operating system is called compatibility.

Application compatibility can exist at the binary level and at the source code level. Applications are usually stored in the OS as executable files containing binary images of code and data. Binary compatibility is achieved when an executable program can be taken and run in the environment of a different OS.

Source-level compatibility requires that the appropriate compiler be included in the software of the computer on which the application is to be run, along with compatibility at the level of libraries and system calls. The application's source code must be recompiled into a new executable module.

Source-level compatibility is important mainly for application developers, who have the source code at their disposal. For end users, only binary compatibility has practical importance, since only then can they use the same product on different operating systems and different machines.

The type of compatibility that is achievable depends on many factors. The most important of them is the processor architecture. If the processor uses the same instruction set (perhaps with additions, as in the case of the IBM PC: the standard set plus multimedia, graphics, and streaming extensions) and the same address range, then binary compatibility can be achieved quite simply. For this, the following conditions must be met:

  • the API that the application uses must be supported by the given OS;
  • the internal structure of the application's executable file must match the structure of executable files in the given OS.

If the processors have different architectures, then, in addition to the above conditions, emulation of the binary code must be organized. For example, emulation of Intel processor instructions on the Motorola 680x0 processor of the Macintosh was widely used. A software emulator of this kind sequentially fetches a binary instruction of the Intel processor and executes an equivalent subroutine written in Motorola processor instructions. Since the Motorola processor does not have exactly the same registers, flags, and internal ALU as Intel processors, it must also simulate (emulate) all these elements using its own registers or memory.

This is simple but very slow work, since one Intel instruction executes much faster than the sequence of Motorola instructions that emulates it. The way out in such cases is to use so-called application software environments, or operating environments. One of the components of such an environment is the set of API functions that the OS exposes to its applications. To reduce the time spent executing foreign programs, application environments imitate calls to library functions.
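The heart of such an emulator is a fetch-decode-execute loop like the sketch below. The instruction set here is invented and tiny; it only shows why emulation is slow: every guest instruction costs a fetch, a decode, and a subroutine's worth of host instructions.

```c
#include <stdint.h>

/* Emulated processor state kept in host memory. */
struct cpu_state {
    uint32_t regs[8];    /* emulated general-purpose registers */
    uint32_t ip;         /* emulated instruction pointer       */
    uint8_t  zero_flag;  /* emulated flag                      */
};

enum { OP_HALT = 0x00, OP_INC = 0x01, OP_JZ = 0x02 };

void emulate(struct cpu_state *cpu, const uint8_t *code)
{
    for (;;) {
        uint8_t op = code[cpu->ip++];        /* fetch */
        switch (op) {                        /* decode */
        case OP_INC: {                       /* execute: INC reg */
            uint8_t r = code[cpu->ip++] & 7;
            cpu->regs[r]++;
            cpu->zero_flag = (cpu->regs[r] == 0);
            break;
        }
        case OP_JZ: {                        /* execute: jump if zero */
            uint8_t target = code[cpu->ip++];
            if (cpu->zero_flag)
                cpu->ip = target;
            break;
        }
        case OP_HALT:
        default:
            return;
        }
    }
}
```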

The effectiveness of this approach rests on the fact that most of today's programs run under GUIs (graphical user interfaces) such as Windows, Mac OS, or UNIX Motif, with applications spending 60-80% of their time executing GUI functions and other OS library calls. It is this property of applications that lets application environments compensate for the large time spent emulating programs instruction by instruction. A carefully designed application software environment contains libraries that mimic the GUI libraries but are written in native code. Thus, a significant acceleration is achieved in executing programs that use the API of another operating system. This approach is also called translation, to distinguish it from the slower process of emulating one instruction at a time.

For example, for a Windows program running on a Macintosh, performance may be very low while Intel processor instructions are being interpreted. But when a GUI function is called, to open a window, say, the OS module implementing the Windows application environment can intercept the call and redirect it to a window-opening routine recompiled for the Motorola 680x0 processor. As a result, in such sections of code, the program's speed can reach (and possibly surpass) its speed on its own processor.
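The sketch below captures that interception in C. The names are illustrative, not the real Windows API surface: the environment ships a native-code stand-in for the foreign GUI routine, so the call bypasses the instruction emulator entirely.

```c
typedef void *HWND_T;   /* opaque window handle (illustrative) */

/* Native window service of the host OS, assumed to exist. */
extern HWND_T host_create_window(const char *title, int w, int h);

/* The environment's native-code replacement for the foreign GUI
   library function. The emulator recognizes calls into the foreign
   GUI library and dispatches here instead of emulating its code. */
HWND_T env_CreateWindow(const char *title, int width, int height)
{
    /* Runs at full native speed: only the arguments are translated,
       not the individual guest instructions of the library. */
    return host_create_window(title, width, height);
}
```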

For a program written for one OS to run on another OS, it is not enough just to ensure API compatibility. The concepts underlying different operating systems may conflict with each other. For example, in one OS an application may be allowed to control I/O devices directly, while in another these actions are the prerogative of the OS.

Each OS has its own resource protection mechanisms, its own error and exception handling algorithms, its own process structure and memory management scheme, its own file access semantics, and its own graphical user interface. To ensure compatibility, the conflict-free coexistence of several ways of managing computer resources must be organized within a single OS.


1.9. Virtual machines as a modern approach to the implementation of multiple application environments

The concept of "virtual machine monitor" (VMM) arose in the late 60s as a software abstraction level, which divided the hardware platform into several virtual machines. Each of these virtual machines (VMs) was so similar to the underlying physical machine that the existing software could be performed on it unchanged. At that time, general computing tasks were performed on expensive mainframes (such as the IBM /360), and users highly appreciated the ability of the VMM to allocate scarce resources among several applications.

In the 1980s and 1990s, the cost of computing hardware fell significantly and effective multitasking operating systems appeared, which reduced the value of VMMs in the eyes of users. Mainframes gave way to minicomputers and then to PCs, and the need for VMMs seemed to disappear. As a result, the hardware support for their effective implementation simply vanished from computer architectures. By the end of the 1980s, in both research and industry, VMMs were perceived as nothing more than a historical curiosity.

Today the VMM is back in the spotlight. Intel, AMD, Sun Microsystems, and IBM are creating virtualization strategies, and labs and universities are developing virtual machine-based approaches to problems of mobility, security, and manageability. What happened between the retirement of VMMs and their revival?

In the 1990s, researchers at Stanford University began to explore the possibility of using virtual machines to overcome the limitations of hardware and operating systems. Problems arose with massively parallel processing (MPP) computers, which were difficult to program and could not run existing operating systems. The researchers found that virtual machines could make this cumbersome architecture look similar enough to existing platforms to take advantage of off-the-shelf operating systems. From this project came the people and ideas that formed the basis of VMware (www.vmware.com), the first supplier of VMMs for mainstream computers.

Oddly enough, the development of modern operating systems and the declining cost of hardware led to exactly the problems that researchers had hoped to solve with VMMs. Cheap hardware contributed to the rapid spread of computers, but the machines were often underused and required additional space and maintenance effort. And the consequences of the OSes' growing functionality were instability and vulnerability.

To reduce the impact of system crashes and protect against break-ins, system administrators turned back to a single-tasking computational model (one application per machine). This resulted in additional costs caused by increased hardware requirements. Moving applications from different physical machines onto VMs and consolidating those VMs on a few physical platforms improved hardware utilization, reduced management costs, and saved floor space. Thus the VMM's ability to multiplex hardware, this time in the name of server consolidation and utility computing, brought VMMs back to life.

At present, the VMM is not so much a tool for organizing multitasking, as it was once conceived, as a solution to problems of security, mobility, and reliability. In many respects, the VMM gives operating system developers the ability to build functionality that is impossible in today's complex operating systems. Features such as migration and protection are much more convenient to implement at the level of a VMM, which maintains backward compatibility when innovative operating system solutions are deployed while preserving previous achievements.

Virtualization is an evolving technology. In general terms, virtualization separates software from the underlying hardware infrastructure. In effect, it breaks the link between a particular set of programs and a particular computer. The virtual machine monitor separates software from hardware and forms an intermediate layer between the software running in the virtual machines and the hardware. This layer allows the VMM to fully control how the guest operating systems (GuestOS) running in the VMs use hardware resources.

The VMM creates a unified view of the underlying hardware, so that physical machines from different vendors with different I/O subsystems look the same and VMs can run on any available hardware. Freed from worrying about individual machines, with their tight interrelations between hardware and software, administrators can treat the hardware simply as a pool of resources from which to provide any service on demand.

Thanks to the full encapsulation of a VM's software state, the VMM can map the VM onto any available hardware resources and even move it from one physical machine to another. The task of load balancing across a group of machines becomes trivial, and reliable ways appear to deal with hardware failures and to grow the system. If a failed computer must be shut down or a new one brought online, the VMM can redistribute the virtual machines accordingly. A virtual machine is also easy to replicate, letting administrators quickly provide new services as needed.

Encapsulation also means that the administrator can suspend or resume a VM at any time, as well as save the current state of the virtual machine or roll it back to a previous state. With this universal undo capability, crashes and configuration errors are easily dealt with. Encapsulation is the basis of a generalized mobility model, since a suspended VM can be copied over the network, stored, and transported on removable media.

The VMM acts as an intermediary in all interactions between the VMs and the underlying hardware, supporting the execution of many virtual machines on a single hardware platform and ensuring their reliable isolation. The VMM makes it possible to assemble a group of VMs with low resource requirements on a single computer, reducing hardware costs and the need for floor space.

Complete isolation is also important for reliability and security. Applications that used to run on one machine can now be distributed across different VMs. If one of them causes an OS crash as a result of an error, the other applications are isolated from it and continue to work. If one of the applications is threatened by an external attack, the attack is localized within the "compromised" VM. Thus, the VMM is a tool for restructuring a system to improve its stability and security, without the additional space and administration effort that running the applications on separate physical machines would require.

The VMM must bind the hardware interface to the VM while retaining full control over the underlying machine and over the procedures for interacting with its hardware. Various methods exist to achieve this goal, each resting on certain technical trade-offs. Weighing these trade-offs involves the main requirements for a VMM: compatibility, performance, and simplicity. Compatibility is important because the VMM's main advantage is the ability to run legacy applications. Performance determines the virtualization overhead: programs in a VM must execute at the same speed as on a real machine. Simplicity is necessary because a failure of the VMM brings down all the VMs running on the computer. In particular, reliable isolation requires that the VMM be free of bugs that attackers could use to destroy the system.

Instead of a complex on-the-fly rewrite of the guest operating system's code, one can make certain changes to the guest operating system itself, modifying the most "interfering" parts of its kernel. This approach is called paravirtualization. Clearly, in this case only the author can adapt the OS kernel, and, for example, Microsoft has shown no desire to adapt the popular Windows 2000 kernel to the realities of specific virtual machines.

In paravirtualization, the VMM developer redefines the interface of the virtual machine, replacing the subset of the original instruction set that is unsuitable for virtualization with more convenient and efficient equivalents. Note that although the OS must be ported to run on such a VM, most ordinary applications can run unchanged.
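In practice the replacement usually takes the form of explicit hypercalls: the ported guest kernel asks the VMM to perform a privileged operation instead of executing the non-virtualizable instruction itself. The sketch below is a generic illustration; the hypercall number and entry point are invented, not any particular VMM's interface.

```c
/* Illustrative hypercall number of an imaginary paravirtualizing VMM. */
#define HCALL_SET_INTERRUPT_MASK 3

/* Assumed entry point into the VMM, e.g. a trap instruction wrapped
   by the VMM vendor's support library. */
extern long hypercall(int number, unsigned long arg);

/* Unmodified guest kernel: would execute a privileged instruction
   (such as POPF on x86) that a VMM cannot trap efficiently.
   Paravirtualized guest kernel: requests the operation explicitly. */
static inline void guest_disable_interrupts(void)
{
    hypercall(HCALL_SET_INTERRUPT_MASK, ~0UL /* mask everything */);
}
```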

The biggest disadvantage of paravirtualization is incompatibility. Any operating system designed to run under a paravirtualizing VMM must be ported to that architecture, which requires negotiating cooperation with the OS vendors. In addition, legacy operating systems cannot be used, and existing machines cannot easily be replaced with virtual ones.

To achieve high performance and compatibility when virtualizing the x86, VMware developed a new virtualization method that combines traditional direct execution with fast, on-the-fly binary code translation. In most modern operating systems, the processor modes used while ordinary application programs execute are easy to virtualize, and therefore they can be handled through direct execution. Privileged modes unsuitable for virtualization can be executed by a binary code translator that corrects the "inconvenient" x86 instructions. The result is a high-performance virtual machine that is fully compatible with the hardware and maintains full software compatibility.

The translated code is very similar to the result of paravirtualization. Ordinary instructions are executed unchanged, while instructions that require special processing (such as POPF and instructions that read the code segment register) are replaced by the translator with sequences of instructions similar to those required on a paravirtualized virtual machine. However, there is an important difference: instead of changing the source code of the operating system or applications, the binary translator changes the code the first time it is executed.

While translating binary code involves some additional cost, under normal workloads it is negligible. The translator processes only a fraction of the code, and program execution speed becomes comparable to direct execution as soon as the translation cache fills up.
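The translation cache is the key to that claim: each guest basic block is translated once and then reused. A minimal sketch of the lookup logic, with invented data structures standing in for a real translator:

```c
#include <stdint.h>
#include <stddef.h>

typedef void (*native_block_fn)(void);

/* One translation cache entry: a guest block and its native code. */
struct tc_entry {
    uint64_t        guest_pc;   /* guest address of the block */
    native_block_fn native;     /* translated native code     */
};

#define TC_SIZE 4096
static struct tc_entry cache[TC_SIZE];

/* Assumed to exist: scans one guest basic block, emits native code
   (rewriting the privileged instructions), returns a callable stub. */
extern native_block_fn translate_block(uint64_t guest_pc);

static native_block_fn lookup_or_translate(uint64_t guest_pc)
{
    struct tc_entry *e = &cache[guest_pc % TC_SIZE];
    if (e->native == NULL || e->guest_pc != guest_pc) {
        e->guest_pc = guest_pc;            /* miss: translate once */
        e->native   = translate_block(guest_pc);
    }
    return e->native;                      /* hit: reuse native code */
}
```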

The use of application software environments simplifies the task of running applications written for one OS on another. Basically, an application environment must include the functions of the application programming interface, as well as means for organizing the conflict-free coexistence, within one OS, of several ways of managing computer resources.

An application environment can be implemented as a regular application, in which case it operates at the user level.


Conclusions

· All computer system software is divided into application software (for solving user problems) and system software (for operating the computer hardware).

· The simplest structuring of an OS divides all OS components into modules that perform the main functions of the OS (the kernel) and modules that perform auxiliary functions. Auxiliary OS modules take the form either of applications (utilities and system processing programs) or of libraries of procedures. Auxiliary modules are loaded into RAM only for the duration of their work; that is, they are transient. Kernel modules reside permanently in RAM; that is, they are resident.

· If there is hardware support for modes with different privilege levels, OS stability can be increased by executing kernel functions in privileged mode and auxiliary OS modules and applications in user mode. This makes it possible to protect OS and application code and data from unauthorized access. The OS can act as an arbiter in disputes between applications over resources.

· Any operating system relies on the computer hardware to solve its tasks, namely: support for privileged mode and address translation, means for switching processes and protecting memory areas, the interrupt system, and the system timer. This makes the OS machine-dependent, tied to a specific hardware platform.



· Microkernel architecture is an alternative to the classical way of building an operating system, in which all the main functions of the operating system, making up the multilayer kernel, execute in privileged mode. In microkernel operating systems, only a very small part of the OS, called the microkernel, remains in privileged mode. All other high-level kernel functions are packaged as user-mode applications.

· An application software environment is a set of OS tools designed to organize the execution, in one OS, of applications created for another OS. Each OS creates at least one application programming environment. The problem is to ensure the compatibility of several software environments within the same OS.

· The alternative to emulation is multiple application environments, each of which includes a set of API functions. These imitate calls to the library functions of the foreign application environment but in fact call their own internal libraries. This is called library translation. It is a purely software approach.

· For a program written for one OS to work under another, conflict-free interaction between the process control methods of the different OSes must be ensured.

Ways to implement application software environments

Depending on the architecture:

1. An application software environment in the form of an application (the top layer of the native OS kernel).

It operates in user mode and translates system calls (API calls) into calls to the native OS. This corresponds to classic multilayer OSes (Unix, Windows).

2. Several peer application environments, each in the form of a separate kernel layer.

They operate in privileged mode. The API calls the functions of the underlying (privileged) OS layer. The task of recognizing and adapting each call falls on the system, which requires considerable resources. A set of identifying characteristics is passed to the kernel for recognition.

3. The microkernel principle.

Each application environment is designed as a separate user-mode server. Applications use the API to make system calls to the corresponding application environment through the microkernel. The application environment processes the request and returns the result through the microkernel; it may call on microkernel functions itself and may also access other OS servers while the request is being processed.

OS interfaces

The OS interface is an interface for application and system programming. It is regulated by standards (POSIX, ISO).

1. The user interface is implemented using special software modules that translate user requests, expressed in a special command language, into requests to the OS.

The set of such modules is called an interpreter. It performs lexical analysis and parsing, and either executes the command itself or passes it to the API.

2. The API is intended to provide application programs with OS resources and to implement other functions. The API describes a set of functions and procedures belonging to the kernel and to OS add-ons. The API is used by system programs both within the OS and outside it, and by application programs through a programming environment.

Providing OS resources ultimately rests on software interrupts, whose implementation depends on the system (vectored, table-based). There are several options for implementing an API: at the OS level (fastest, lowest level), at the system programming level (more abstract, less fast), and at the level of an external library of procedures and functions (a small set).
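As a concrete illustration of the lowest level, here is how a program on x86-64 Linux can invoke the kernel's write system call directly through the processor's trap mechanism, bypassing the C library (a sketch; in real programs you would simply call write()):

```c
/* Invoke Linux system call number 1 (write) on x86-64 via the
   'syscall' instruction. On older 32-bit x86 systems the same
   transition went through the int 0x80 software interrupt. */
static long raw_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)                        /* rax: return value */
                      : "a"(1L),                         /* rax: syscall no.  */
                        "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");         /* clobbered by syscall */
    return ret;
}

int main(void)
{
    raw_write(1, "hello\n", 6);   /* same effect as write(1, "hello\n", 6) */
    return 0;
}
```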

Linux OS interfaces:

  • programmatic (no intermediaries: the direct execution of system calls);
  • command line (intermediary: the Shell interpreter, which redirects the call);
  • graphical (intermediaries: the Shell plus a graphical shell).

File system

The file system is the part of the operating system designed to provide users with a convenient interface for working with files and to allow files stored on external media (hard disk + RAM) to be shared by several users and processes.

The file system comprises:

  • the totality of all files on all storage media,
  • the sets of data structures used to manage files, such as file directories, file descriptors, and tables of free and used disk space allocation,
  • a complex of system software tools implementing file management operations, in particular: creating, destroying, reading, writing, naming, and searching files, among others (the basic operations are illustrated below).
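A minimal demonstration of those basic operations through the POSIX API (create, write, read, rename, destroy):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[16];

    /* create + write */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "data\n", 5);
    close(fd);

    /* open + read */
    fd = open("demo.txt", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof buf);
    close(fd);
    printf("read %zd bytes\n", n);

    rename("demo.txt", "renamed.txt");   /* (re)name */
    unlink("renamed.txt");               /* destroy  */
    return 0;
}
```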

One of a file's attributes is its name, the means by which the user identifies the file. In systems that allow a file to have multiple names, the file is assigned an inode, which the OS kernel uses to identify it. Names are formed differently in different operating systems.
