Hardware and software setup

Application environments: schemes for running application programs written for other operating systems

While many architectural features of the OS matter directly only to system programmers, the concept of multiple application (operating) environments relates directly to the needs of end users: the ability of the operating system to run applications written for other operating systems. This property of the operating system is called compatibility.

Application compatibility can exist at the binary level and at the source code level [13]. Applications are usually stored in the OS in the form of executable files containing binary images of code and data. Binary compatibility is achieved when an executable program can be taken and run in the environment of a different OS.

Source-level compatibility requires that an appropriate compiler be present in the software of the computer on which the application is to run, as well as compatibility at the level of libraries and system calls. The application's source code must then be recompiled into a new executable module.

Source-level compatibility is important mainly for application developers, who have the sources at their disposal. For end users, only binary compatibility has practical importance, since only then can they use the same product on different operating systems and on different machines.

The type of possible compatibility depends on many factors. The most important of them is the architecture of the processor. If the processor uses the same instruction set (perhaps with additions, as in the case of the IBM PC: standard set + multimedia + graphics + streaming) and the same address range, then binary compatibility can be achieved quite simply. For this, the following conditions must be met:

  • The API that the application uses must be supported by the given OS;
  • the internal structure of the application's executable file must match the structure of the OS's executable files (a sketch of the kind of format check a loader performs is given below).
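As a minimal illustration of the second condition, the sketch below (in C) shows how a loader might recognize an executable's format by its magic bytes before attempting to run the image. The ELF and MZ/PE signatures are real; everything else is simplified for illustration.

```c
/* Minimal sketch: recognizing an executable's format by its
 * magic bytes before attempting to run it. Illustrative only. */
#include <stdio.h>
#include <string.h>

const char *detect_format(const unsigned char *hdr, size_t len)
{
    if (len >= 4 && memcmp(hdr, "\x7f" "ELF", 4) == 0)
        return "ELF (UNIX-like systems)";
    if (len >= 2 && hdr[0] == 'M' && hdr[1] == 'Z')
        return "MZ/PE (DOS, Windows)";
    return "unknown format: the OS loader cannot run this image";
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    unsigned char hdr[4] = {0};
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    size_t n = fread(hdr, 1, sizeof hdr, f);   /* read the header bytes */
    fclose(f);
    printf("%s: %s\n", argv[1], detect_format(hdr, n));
    return 0;
}
```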

If the processors have different architectures, then, in addition to the above conditions, emulation of the binary code must be organized. For example, emulation of Intel processor instructions on the Motorola 680x0 processor of the Macintosh was widely used. A software emulator of this kind fetches each binary instruction of the Intel processor in turn and executes an equivalent subroutine written in the instructions of the Motorola processor. Since the Motorola processor does not have exactly the same registers, flags, internal ALU, etc. as Intel processors, it must also simulate (emulate) all of these elements using its own registers or memory.
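The following C sketch shows the general shape of such an emulator's fetch-decode-dispatch loop. The opcode set and the guest register state here are invented for illustration and do not correspond to the real Intel instruction encoding.

```c
/* Schematic software emulator loop: fetch one "foreign" instruction,
 * dispatch to a subroutine written for the host CPU. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t eax, ebx;   /* emulated guest registers, kept in host memory */
    uint32_t eip;        /* emulated instruction pointer */
    uint8_t  zf;         /* emulated zero flag */
} GuestCPU;

enum { OP_NOP = 0x00, OP_INC_EAX = 0x01, OP_MOV_EBX_EAX = 0x02, OP_HALT = 0xFF };

void emulate(GuestCPU *cpu, const uint8_t *code, size_t len)
{
    while (cpu->eip < len) {
        uint8_t op = code[cpu->eip++];       /* fetch */
        switch (op) {                        /* decode and dispatch */
        case OP_NOP:                                                break;
        case OP_INC_EAX:     cpu->eax++; cpu->zf = (cpu->eax == 0); break;
        case OP_MOV_EBX_EAX: cpu->ebx = cpu->eax;                   break;
        case OP_HALT:        return;
        default:
            fprintf(stderr, "unimplemented opcode 0x%02X\n", op);
            return;
        }
    }
}

int main(void)
{
    uint8_t program[] = { OP_INC_EAX, OP_INC_EAX, OP_MOV_EBX_EAX, OP_HALT };
    GuestCPU cpu = {0};
    emulate(&cpu, program, sizeof program);
    printf("eax=%u ebx=%u zf=%u\n", cpu.eax, cpu.ebx, cpu.zf);
    return 0;
}
```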

This is conceptually simple but very slow work, since a single Intel instruction has to be replaced by a whole sequence of Motorola instructions that emulates it. The way out in such cases is to use so-called application software environments, or operating environments. One of the components of such an environment is the set of API functions that the OS exposes to its applications. To reduce the time spent executing "foreign" programs, application environments imitate calls to library functions.

The effectiveness of this approach stems from the fact that most of today's programs run under GUIs (graphical user interfaces) such as Windows, the Mac interface, or Motif under UNIX, and applications spend 60-80% of their time executing GUI functions and other OS library calls. It is this property of applications that allows application environments to compensate for the large time spent on instruction-by-instruction emulation of programs. A carefully designed application software environment contains libraries that mimic the GUI libraries but are written in "native" code. In this way a significant acceleration is achieved for programs using the API of another operating system. This approach is also called translation, to distinguish it from the slower process of emulating one instruction at a time.
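A hedged sketch of the translation idea in C: a foreign-API-compatible entry point is implemented as a thin shim over a native host routine, so the call runs as native code instead of being emulated instruction by instruction. All function names here (CreateWindowCompat, host_open_window) are invented for illustration and are not a real API.

```c
#include <stdio.h>

/* native host GUI routine (stand-in for the real native library) */
static int host_open_window(const char *title, int w, int h)
{
    printf("[host] window '%s' %dx%d\n", title, w, h);
    return 1; /* window handle */
}

/* foreign-API-compatible entry point exported by the application
 * environment; the guest program calls this exactly as it would
 * call the original library function of its "own" OS */
int CreateWindowCompat(const char *title, int width, int height)
{
    /* translate arguments and conventions if needed, then run natively */
    return host_open_window(title, width, height);
}

int main(void)
{
    /* a "foreign" program running inside the application environment */
    int hwnd = CreateWindowCompat("Demo", 640, 480);
    printf("guest got handle %d\n", hwnd);
    return 0;
}
```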

For example, for a Windows program running on a Macintosh, performance may be very low while Intel processor instructions are being interpreted. But when a GUI function is called - a window is opened, and so on - the OS module that implements the Windows application environment can intercept the call and redirect it to a window-opening routine recompiled for the Motorola 680x0 processor. As a result, in such sections of the code, the program's speed can approach (and possibly surpass) its speed on its native processor.

For a program written for one OS to run on another OS, it is not enough merely to ensure API compatibility. The concepts underlying different operating systems may conflict with each other. For example, in one OS an application may be allowed to control I/O devices directly, while in another these actions are the prerogative of the OS.

Each OS has its own resource protection mechanisms, its own error and exception handling algorithms, its own processor and memory management schemes, its own file access semantics, and its own graphical user interface. To ensure compatibility, several ways of managing computer resources must be made to coexist without conflict within the same OS.

There are various options for building multiple application environments, differing both in their architectural design and in their functionality, and providing different degrees of application portability. One of the most obvious options for implementing multiple application environments is based on the standard layered structure of the OS. In this scheme, an OS1 supports, in addition to its "native" applications, applications written for OS2 and OS3. To do this, it contains special applications - application software environments - that translate the interfaces of the "foreign" operating systems, API OS2 and API OS3, into the interface of the "native" API OS1. For example, if OS2 were UNIX and OS1 were OS/2, then to execute the fork() process-creation system call in a UNIX application, the software environment would have to call the OS/2 kernel with the DosExecPgm() system call. Unfortunately, the behavior of almost every function making up the API of one OS differs significantly from the behavior of the corresponding function in another: for the OS/2 process-creation function DosExecPgm() to correspond fully to fork() in UNIX-like systems, it would have to be changed and new functionality written, such as support for copying the parent process's address space into the child's [17].

Another way to build multiple application environments is based on the microkernel approach. Here it is very important to separate the basic OS mechanisms, common to all application environments, from the high-level functions, specific to each application environment, that solve strategic tasks. In accordance with the microkernel architecture, all OS functions are implemented by the microkernel and by user-mode servers. It is important that each application environment is designed as a separate user-mode server and does not include the basic mechanisms.

Applications, using the API, make system calls to the corresponding application environment through the microkernel. The application environment processes the request, executes it (perhaps calling on the basic functions of the microkernel for help), and sends the result back to the application. In the course of executing the request, the application environment in turn has to access the basic OS mechanisms implemented by the microkernel and the other OS servers.
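The following C mock-up models this request flow under stated assumptions: the application's system call becomes a message, the (simulated) microkernel delivers it to the application-environment server, and the server replies with the result. Real microkernel IPC is kernel-mediated and far more involved; all the types and names below are invented.

```c
#include <stdio.h>
#include <string.h>

typedef struct { char call[32]; int arg; } Request;
typedef struct { int status; int value; } Reply;

/* user-mode server implementing one application environment */
static Reply env_server_handle(const Request *req)
{
    Reply r = { -1, 0 };
    if (strcmp(req->call, "open_file") == 0) {
        /* here the server might itself call basic microkernel services */
        r.status = 0;
        r.value  = 42;          /* pretend file descriptor */
    }
    return r;
}

/* the microkernel's only job in this model: deliver the message */
static Reply microkernel_send(const Request *req)
{
    return env_server_handle(req);
}

int main(void)
{
    Request req = { "open_file", 0 };     /* application's system call */
    Reply rep = microkernel_send(&req);   /* routed through the "kernel" */
    printf("status=%d value=%d\n", rep.status, rep.value);
    return 0;
}
```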

This approach to designing multiple application environments has all the advantages and disadvantages of micro-kernel architecture, in particular:

  • it is very easy to add and exclude application environments, which is a consequence of the good extensibility of micro-kernel operating systems;
  • if one of the application environments fails, the rest remain operational, which contributes to the reliability and stability of the system as a whole;
  • the low performance of microkernel operating systems affects the speed of the application environments, and hence the speed of application execution.

In conclusion, creating several application environments within a single OS for executing the applications of different operating systems makes it possible to have a single version of a program and move it between operating systems. Multiple application environments provide binary compatibility between a given OS and applications written for other operating systems.

1.9. Virtual machines as a modern approach to the implementation of multiple application environments

The concept of "virtual machine monitor" (VMM) arose in the late 60s as a software abstraction level, which divided the hardware platform into several virtual machines. Each of these virtual machines (VMs) was so similar to the underlying physical machine that the existing software could be performed on it unchanged. At that time, general computing tasks were performed on expensive mainframes (such as the IBM /360), and users highly appreciated the ability of the VMM to allocate scarce resources among several applications.

In the 1980s and 1990s, the cost of computing equipment fell significantly and efficient multitasking operating systems appeared, which reduced the value of VMMs in users' eyes. Mainframes gave way to minicomputers and then to PCs, and the need for VMMs seemed to disappear. As a result, the hardware support for their efficient implementation simply vanished from computer architectures. By the end of the 1980s, in both research and industry, the VMM was perceived as nothing more than a historical curiosity.

Today the VMM is back in the spotlight. Intel, AMD, Sun Microsystems, and IBM are creating virtualization strategies, and labs and universities are developing virtual machine-based approaches to problems of mobility, security, and manageability. What happened between the retirement of VMMs and their revival?

In the 1990s, researchers at Stanford University began to explore the possibility of using virtual machines to overcome the limitations of hardware and operating systems. Problems arose with massively parallel processing (MPP) computers, which were difficult to program and could not run existing operating systems. The researchers found that virtual machines could make this cumbersome architecture look similar enough to existing platforms to take advantage of off-the-shelf operating systems. From this project came the people and ideas that became the foundation of VMware (www.vmware.com), the first supplier of VMMs for mainstream computers.

Oddly enough, the development of modern operating systems and declining hardware costs led to exactly the problems that researchers had hoped to solve with VMMs. Cheap equipment contributed to the rapid spread of computers, but the machines were often underused while still demanding extra space and maintenance effort. And the consequences of the growing functionality of operating systems were instability and vulnerability.

To reduce the impact of system crashes and protect against break-ins, system administrators returned to a single-tasking computing model (one application per machine). This led to additional costs due to increased hardware requirements. Moving applications from separate physical machines into VMs, and consolidating those VMs on a few physical platforms, improved hardware utilization, reduced management costs, and cut floor space. Thus the VMM's ability to multiplex hardware - this time in the name of server consolidation and utility computing - brought it back to life.

Today the VMM is valued not so much as a tool for organizing multitasking, as it was once conceived, but as a solution to the problems of security, mobility, and reliability. In many ways the VMM gives operating system developers the ability to implement functionality that is impossible in today's complex operating systems. Features such as migration and protection are much more convenient to implement at the VMM level, which makes it possible to remain backward compatible while deploying innovative operating system solutions and preserving previous investments.

Virtualization is an evolving technology. In general terms, virtualization separates the software from the underlying hardware infrastructure; in effect, it breaks the link between a given set of programs and a specific computer. The virtual machine monitor separates the software from the hardware, forming an intermediate layer between the software running in virtual machines and the hardware itself. This layer allows the VMM to fully control the use of hardware resources by the guest operating systems (GuestOS) that run in the VMs.

The VMM creates a unified view of the underlying hardware, so that physical machines from different vendors with different I/O subsystems look the same and a VM can run on any available hardware. Freed from worrying about individual machines with their tight hardware-software interconnections, administrators can treat the hardware simply as a pool of resources for providing any service on demand.

Because the software state of a VM is fully encapsulated, the VMM can map the VM onto any available hardware resources and even move it from one physical machine to another. The task of load balancing across a group of machines becomes trivial, and reliable ways appear to deal with hardware failures and to grow the system. If a failed computer must be shut down or a new one brought online, the VMM can redistribute the virtual machines accordingly. A virtual machine is also easy to replicate, allowing administrators to quickly provide new services as needed.

Encapsulation also means that the administrator can suspend or resume a VM at any time, as well as save the current state of a virtual machine or roll it back to a previous state. With this universal undo capability, crashes and configuration errors are easily dealt with. Encapsulation is also the basis of a general mobility model, since a suspended VM can be copied over the network, or stored and transported on removable media.

The VMM acts as an intermediary in all interactions between the VMs and the underlying hardware, supporting the execution of many virtual machines on a single hardware platform and ensuring their reliable isolation. The VMM makes it possible to assemble a group of VMs with low resource requirements on a single computer, reducing hardware costs and the need for floor space.

Complete isolation is also important for reliability and security. Applications that previously ran on one machine can now be distributed across different VMs. If one of them causes an OS crash as a result of an error, the other applications are isolated from it and continue to work. If one of the applications is subjected to an external attack, the attack is contained within the "compromised" VM. Thus the VMM is a tool for restructuring a system to improve its stability and security, without the additional space and administration effort that running the applications on separate physical machines would require.

The VMM must export a hardware interface to the VM while retaining full control over the underlying machine and over all interaction with its hardware. Different methods exist to achieve this goal, each based on certain technical trade-offs. In weighing these trade-offs, the main requirements for a VMM are considered: compatibility, performance, and simplicity. Compatibility is important because the VMM's main advantage is the ability to run legacy applications. Performance determines the virtualization overhead: programs in a VM should run at nearly the same speed as on the real machine. Simplicity is necessary because a failure of the VMM brings down all the VMs running on the computer; in particular, reliable isolation requires that the VMM be free of bugs that attackers could use to subvert the system.

Instead of performing a complex on-the-fly rewrite of the guest code, one can make certain changes to the guest operating system itself, altering the most "troublesome" parts of its kernel. This approach is called paravirtualization. Clearly, only the author can adapt an OS kernel in this way, and Microsoft, for example, has shown no desire to adapt the popular Windows 2000 kernel to the realities of specific virtual machines.

In paravirtualization, the VMM developer redefines the virtual machine interface, replacing the subset of the original instruction set that is unsuitable for virtualization with more convenient and efficient equivalents. Note that although the OS must be ported to run on such a VM, most ordinary applications can run unchanged.
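A minimal sketch of the paravirtual idea, assuming a hypothetical hypercall interface: the ported guest kernel replaces a privileged operation (here, disabling interrupts) with an explicit call into the VMM. The vmm_hypercall() entry point and the hypercall numbers are invented stand-ins, not any real monitor's API.

```c
#include <stdio.h>

enum hypercall { HC_DISABLE_IRQ, HC_ENABLE_IRQ };

/* stand-in for the VMM's hypercall entry point */
static long vmm_hypercall(enum hypercall nr)
{
    printf("[vmm] servicing hypercall %d\n", nr);
    return 0;
}

/* original (non-virtualizable) version, shown for contrast:
 *   static inline void cli(void) { asm volatile("cli"); }
 */

/* paravirtualized replacement used by the ported guest kernel */
static void guest_disable_interrupts(void)
{
    vmm_hypercall(HC_DISABLE_IRQ);   /* ask the VMM instead of the CPU */
}

int main(void)
{
    guest_disable_interrupts();      /* guest kernel code path */
    return 0;
}
```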

The biggest disadvantage of paravirtualization is incompatibility. Any operating system intended to run under a paravirtualizing VMM must be ported to this architecture, which requires negotiating cooperation with OS vendors. In addition, legacy operating systems cannot be used, and existing machines cannot easily be replaced with virtual ones.

To achieve high performance and compatibility when virtualizing the x86, VMware developed a new virtualization method that combines traditional direct execution with fast on-the-fly binary translation. In most modern operating systems, the processor modes used when running ordinary application programs are easily virtualized, so such code can run through direct execution. Privileged modes unsuitable for virtualization are handled by a binary code translator, which corrects the "inconvenient" x86 instructions. The result is a high-performance virtual machine that fully matches the hardware and maintains complete software compatibility.

The translated code is very similar to the result of paravirtualization. Ordinary instructions are executed unchanged, while instructions that require special processing (such as POPF and reads of the code segment register) are replaced by the translator with instruction sequences similar to those a paravirtualized virtual machine would require. There is, however, an important difference: instead of changing the source code of the operating system or the applications, the binary translator changes the code the first time it is executed.

Although translating binary code involves some extra cost, under normal workloads it is negligible. The translator processes only a fraction of the code, and program execution speed becomes comparable to that of direct execution once the translation cache has filled.
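The sketch below illustrates in C why this overhead fades: translated blocks are cached by guest address, so each block is translated once (the slow path) and thereafter reused at near-native speed (the fast path). The data structures are invented for illustration; a real translator caches actual generated host code.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_SLOTS 256

typedef void (*HostBlock)(void);            /* translated host code */

static struct { uint32_t guest_pc; HostBlock code; } cache[CACHE_SLOTS];

static HostBlock translate(uint32_t guest_pc);   /* slow path (stub) */

static HostBlock lookup_or_translate(uint32_t guest_pc)
{
    unsigned slot = guest_pc % CACHE_SLOTS;
    if (cache[slot].guest_pc != guest_pc || cache[slot].code == NULL) {
        cache[slot].guest_pc = guest_pc;         /* miss: translate once */
        cache[slot].code     = translate(guest_pc);
    }
    return cache[slot].code;                     /* hit: reuse host code */
}

static void demo_block(void) { puts("executing translated block"); }

static HostBlock translate(uint32_t guest_pc)
{
    printf("translating guest block at 0x%08X\n", guest_pc);
    return demo_block;
}

int main(void)
{
    lookup_or_translate(0x1000)();   /* translated on first use */
    lookup_or_translate(0x1000)();   /* served from the cache */
    return 0;
}
```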

Application programs are the most numerous class of software.

Application software is designed to enable the use of computing in various fields of human activity.

Application programs are programs designed to solve specific user tasks.

One possible classification option is given below.

Classification of application software by purpose

Text editor - a program designed only for viewing, entering and editing text.

Word processor - a program that provides the ability to enter, edit, and format text, as well as to insert non-textual objects (graphics, multimedia, etc.) into a text document.

All text editors save plain ("clean") text to the file and are therefore compatible with one another.

Different word processors write formatting information to the file in different ways and are therefore incompatible with one another.

Main components of a word processor:

  • Font set.
  • Spellchecking.
  • Preview of printed pages.
  • Merging documents; multi-window editing.
  • Auto-formatting and automatic hyphenation.
  • Standard tools.
  • Spreadsheet editor and calculator.
  • Inserting graphic objects.

Examples - MS Word, Write, WordPerfect, Ami Pro, MultiEdit, Lexicon, Refis

Publishing systems - needed for preparing documents of typographic quality and for computer layout (combining text and graphics into a book, magazine, brochure, or newspaper).

Examples - Corel Ventura, QuarkXPress, Adobe PageMaker, MS Publisher, FrameMaker

Graphic information - information or data presented in the form of diagrams, sketches, images, graphs, charts, and symbols.

Graphics editor - a program for creating, editing, and viewing graphic images.

The main components of the graphic editor:

  • A set of fonts, working with text.
  • Standard tools.
  • Picture library.
  • Combining pictures.
  • Special effects.

Three kinds of computer graphics are distinguished: raster graphics, vector graphics, and fractal graphics. They differ in the principles by which the image is formed when displayed on a monitor screen or printed on paper.

Raster graphics are used in the development of electronic (multimedia) and printing publications.

Illustrations made with raster graphics are rarely created manually using computer programs. More often, illustrations prepared by the artist on paper or photographs are scanned for this purpose. Recently, digital photo and video cameras have been widely used to enter raster images into a computer.

Fractal graphics are rarely used to create printed or electronic documents, but they are often used in entertainment programs.

Examples - Paint, PaintBrush, CorelDraw, MS PhotoEditor, Adobe Photoshop, 3D Studio MAX

A DBMS (database management system) is designed to automate the procedures for creating, storing, and retrieving electronic data (processing information arrays).

Examples - dBase, Paradox, MS Access, Oracle, FoxPro

Integrated systems come in two types:

  • traditional (fully connected) application program packages (APPs);
  • application packages with object-related integration.

Traditional APPs

An integrated software package is a multifunctional stand-alone package that combines the functions and capabilities of various specialized (problem-oriented) packages into a single whole. Such programs integrate the functions of a text editor, a DBMS, and a spreadsheet processor. On the whole, the cost of such a package is much lower than the total cost of the equivalent specialized packages.

The package links the data together; at the same time, however, the capabilities of each component are narrower than those of the corresponding specialized package.

A typical situation: data received from a database must be processed with a spreadsheet processor, presented graphically, and then inserted into a text. For this kind of work there are so-called integrated packages - software tools that combine features individually characteristic of text editors, graphics systems, spreadsheets, databases, and other software tools. Of course, this combination of capabilities is achieved at the cost of compromise. Some features in integrated packages are limited or not fully implemented; this concerns, first of all, the richness of the commands for handling the database and the spreadsheet, their sizes, and the macro languages. However, the advantages created by the single interface of an integrated software package are undeniable.

Well-known packages include Open Access, Framework by Ashton-Tate, Lotus 1-2-3 and Symphony by Lotus Development Corporation, and LotusWorks.

APPs with object-related integration

This is the unification of specialized packages around a single resource base, providing interaction between the applications (the programs of the package) at the object level and a single simplified switching center between the programs.

Integration means giving the components of the suite a uniform look and uniform methods of working with them. Consistency of the interfaces is implemented through common icons, menus, dialog boxes, and so on. Ultimately this increases productivity and reduces the time needed to master the package.

A feature of this type of integration is the use of shared resources. Types of resource sharing:

  • use of utilities common to all programs in the suite (for example, the spell-checking utility);
  • use of objects that can be shared by several programs;

When it comes to sharing objects across applications, there are two main standards:

  • Object Linking and Embedding (OLE) by Microsoft;
  • OpenDoc (open document) by Apple, Borland, IBM, Novell, and WordPerfect.

The dynamic object linking mechanism allows the user to place information created by one application program into a document generated by another. The user can then edit this information in the new document using the program with which the object was created.

This mechanism also allows OLE objects to be dragged from the window of one application into the window of another.

OpenDoc is an object-oriented system based on open standards of the participating companies. The object model is the Distributed System Object Model (DSOM), developed by IBM for OS/2.

Other features of this type of integration include:

  • a simple way to switch from one application to another;
  • automation tools for working with the application (a macro language).

Examples: Borland Office for Windows, Lotus SmartSuite for Windows, MS Office.

An expert system (ES) is an artificial intelligence system built on deep specialized knowledge of some narrow subject area (obtained from experts - specialists in that field). Expert systems are designed to solve problems with uncertainty and incomplete initial data that require expert knowledge. In addition, these systems must be able to explain their behavior and their decisions. Their distinctive feature is the ability to accumulate the knowledge and experience of qualified specialists (experts) in some field. Using this knowledge, ES users who lack the necessary qualifications can solve their problems almost as successfully as the experts themselves. This effect is achieved because the system reproduces approximately the same chain of reasoning as a human expert.

The fundamental difference between expert systems and other programs is their adaptability, i.e., their ability to change in the course of self-learning.

It is customary to distinguish three main modules in an ES: the knowledge base module, the inference module, and the user interface; a toy sketch of how they fit together is given below.
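A toy C illustration of the three modules, with an invented automotive-diagnosis domain: a small rule base plays the knowledge base, a forward-chaining loop plays the inference module, and plain text input/output stands in for the user interface (including a rudimentary explanation of which rule fired).

```c
#include <stdio.h>
#include <string.h>

#define MAX_FACTS 16

typedef struct { const char *if_fact; const char *then_fact; } Rule;

/* knowledge base module: rules obtained "from the expert" */
static const Rule rules[] = {
    { "engine_cranks=no", "check=battery" },
    { "check=battery",    "advice=charge_or_replace_battery" },
};

static const char *facts[MAX_FACTS];
static int nfacts;

static int known(const char *f)
{
    for (int i = 0; i < nfacts; i++)
        if (strcmp(facts[i], f) == 0) return 1;
    return 0;
}

/* inference module: apply rules until no new facts appear */
static void infer(void)
{
    int changed = 1;
    while (changed) {
        changed = 0;
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (known(rules[i].if_fact) && !known(rules[i].then_fact)
                && nfacts < MAX_FACTS) {
                facts[nfacts++] = rules[i].then_fact;
                /* explanation: report which rule fired and why */
                printf("rule fired: %s -> %s\n",
                       rules[i].if_fact, rules[i].then_fact);
                changed = 1;
            }
    }
}

int main(void)
{
    facts[nfacts++] = "engine_cranks=no";   /* user interface: input fact */
    infer();
    return 0;
}
```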

Expert systems are used in many areas of human activity: in science (classification of animals and plants by species, chemical analysis), medicine (diagnosis, analysis of electrocardiograms, choice of treatment), engineering (troubleshooting technical devices, tracking the flight of spacecraft and satellites), geological exploration, economics, political science and sociology, forensic science, linguistics, and many others. There are both highly specialized expert systems and "shells" with which, without being a programmer, you can create your own ES.

Hypertext is a form of organizing textual material not as a linear sequence but as a set of possible transitions (links) between its individual fragments. In hypertext systems, the information resembles the text of an encyclopedia, and any selected fragment of text can be reached arbitrarily by following links. Organizing information as hypertext is used to create reference manuals, dictionaries, and context-sensitive help in application programs.
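A minimal data-structure sketch in C of the hypertext idea: text fragments plus arbitrary links between them, so that the reading order is chosen by the reader rather than fixed in advance. The structure is illustrative, not any real hypertext system's format.

```c
#include <stdio.h>

#define MAX_LINKS 4

typedef struct Fragment {
    const char *text;
    struct Fragment *links[MAX_LINKS];   /* possible transitions */
    int nlinks;
} Fragment;

int main(void)
{
    Fragment intro  = { "Operating systems...",   {0}, 0 };
    Fragment detail = { "Compatibility means...", {0}, 0 };
    intro.links[intro.nlinks++] = &detail;   /* non-linear transition */

    /* follow a link chosen arbitrarily by the reader */
    printf("%s -> %s\n", intro.text, intro.links[0]->text);
    return 0;
}
```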

Multimedia systems - programs that combine visual and audio effects under the control of interactive software.

Workstation - an automated workplace.

ASNI - automated systems for scientific research.

ACS - automated control systems.

User applications are created by the user with the programming tools available to him within a particular computing environment. The creation and debugging of such programs is carried out by each user individually, according to the rules and conventions of the APP or OS in which they are used.

The concept of microkernel architecture

Binary and Source Compatibility

Binary compatibility is a type of program compatibility that allows the program to work in different environments without changing its executable files.

This term is often used in the sense of "operating system compatibility", in which case it means the ability of a program compiled for one operating system to run on another operating system without recompilation. Binary compatibility includes byte-level compatibility of load modules, complete identity of the mechanisms for calling functions, passing variables, and receiving results, and full implementation of the programming interface. Technically the implementation may be completely different: the main thing is that all calls are implemented and lead to the expected result; how that result is achieved is up to the program's creators.

Source-level compatibility requires an appropriate compiler in the software, as well as compatibility at the level of libraries and system calls. This requires recompiling the existing source code into a new executable module.

The microkernel architecture is an alternative to the classical way of building an operating system, in which all the main OS functions making up the multilayer kernel run in privileged mode. In microkernel operating systems, only a very small part of the OS, called the microkernel, runs in privileged mode. All other high-level kernel functions are packaged as user-mode applications. Microkernel operating systems satisfy most of the requirements placed on modern operating systems: they are portable, extensible, and reliable, and they create good prerequisites for supporting distributed applications. These benefits come at the cost of reduced performance, which is the main drawback of the microkernel architecture.

One of the more obvious options for implementing multiple application environments is based on the standard layered structure of the OS.

OS1 supports, in addition to its own applications, the applications of OS2 and OS3. For this it contains special applications - application software environments - that translate the interfaces of the foreign operating systems, API OS2 and API OS3, into the interface of the native API OS1.

Another implementation of multiple application environments assumes the presence in the OS of several peer application programming interfaces.

In this variant, the application programming interfaces of all the operating systems reside in the kernel space of the system.



The functions of the API level call the functions of the underlying OS level, which must support all three (in this case) incompatible application environments.

The functions of each API are implemented by the kernel with the specifics of the corresponding OS taken into account, even when the functions have a similar purpose.

Another way to build multiple application environments is based on the microkernel approach. Here it is important to separate the basic OS mechanisms, common to all application environments, from the high-level functions specific to each of them.

In accordance with the microkernel architecture, all OS functions are implemented by the microkernel and user-mode servers.

It is important that each application environment is designed as a separate user-mode server and does not include the underlying mechanisms.

The application uses the API to direct system calls to the corresponding application environment through the microkernel.

The application environment receives the request, executes it, and sends the result back to the application. While executing the request, the application environment has to access the basic OS mechanisms implemented by the microkernel and the other OS servers.

This approach to designing multiple application environments has all the advantages and disadvantages of a microkernel architecture.

Several application programs combined to solve a single user task are called an application or an application environment. Examples are graphics and text editors, spreadsheet processing systems, database management systems, communication programs, and so on.

The application environment is the computing environment created by application programs. Convenient and widespread applications for working with various types of data are the Microsoft Office programs, designed to work in the Windows environment. An important advantage of Windows applications is their visual clarity. First, all tools of the environment available to the user can be represented graphically, as command buttons placed on a special panel. The tools are the commands of the main menu that let the user act on the objects of the application environment; a graphic image of each tool is placed on a command button. The images on the buttons are now standardized, so one can speak of a special language of computer notation. Each environment has a set of standard tools such as Open, Save, Delete, Undo, Copy, and Paste; the buttons for these tools are placed on a panel called the Standard toolbar. Each application environment also has its own specific tools, for which graphic images have been developed as well.

Secondly, documents created in applications are displayed on the screen exactly as they will be printed on paper. This is especially important when you know in advance what format the final document should be.

Multitasking. Another distinctive feature of Windows applications is multitasking. Several documents created by different applications can be open on the desktop at the same time: you can simultaneously edit a drawing, write a letter, and perform calculations. However, the notion of simultaneity needs clarification. All of these tasks can be launched, and once launched they all reside in the computer's random access memory at the same time. But the user cannot apply the same organ of perception to two different tasks at once: you cannot, for example, read text and draw at the same time, since the human eyes are not adapted to it. Accordingly, in such cases a person works with the documents sequentially, for example first drawing, then writing. If, however, the tasks involve different organs of perception, they really can be performed simultaneously: if you start a CD player program and a word processor, you can listen to music and type text at the same time, using hearing and vision respectively.

Organization of data exchange. Another important feature of Windows application environments is the ability to share data between applications. The system environment provides two different ways to exchange data between applications: through the clipboard and through OLE technology.

Clipboard exchange allows you either to move a document object to a new location or to place a copy of it in a new location or a new document. Clipboard exchange transfers objects and their copies from document to document without maintaining any connection with the application in which the object was created.

Clipboard exchange is performed in two stages. In the first stage, the object itself or a copy of it is placed in the clipboard. In the second stage, the object is inserted from the clipboard into the chosen document.
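The two stages can be shown with the real Win32 clipboard API in C (Windows only): stage one places a copy of the data in the clipboard, stage two pastes it into the receiving context. Error handling is minimal, and the "logo" is plain text for the sake of the demo.

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *text = "company logo (as plain text for the demo)";

    /* stage 1: place a copy of the object into the clipboard */
    if (OpenClipboard(NULL)) {
        EmptyClipboard();
        HGLOBAL h = GlobalAlloc(GMEM_MOVEABLE, strlen(text) + 1);
        if (h) {
            memcpy(GlobalLock(h), text, strlen(text) + 1);
            GlobalUnlock(h);
            SetClipboardData(CF_TEXT, h);   /* clipboard now owns h */
        }
        CloseClipboard();
    }

    /* stage 2: another application pastes the object from the clipboard */
    if (OpenClipboard(NULL)) {
        HANDLE h = GetClipboardData(CF_TEXT);
        if (h) {
            const char *pasted = (const char *)GlobalLock(h);
            printf("pasted: %s\n", pasted);   /* no link to the source app */
            GlobalUnlock(h);
        }
        CloseClipboard();
    }
    return 0;
}
```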

OLE technology, provided by the Windows software environment, maintains a constant link between the application environment into which an object is embedded and the application environment in which the object was created. The use of OLE technology is effective when the same object is used in different documents. For example, a company logo was created using a text editor, and this emblem is then used when creating various documents (a certificate, a letter, a statement of an act, etc.). Suppose the logo is later changed. If the emblem was placed in the documents via the clipboard, it will have to be inserted into each document again. If it was embedded using OLE technology, the logo will be updated automatically in all linked documents once the source file with the logo is edited.

Creation of compound documents. The organization of data exchange between application environments ensures their integration. The integration of application environments means combining them in such a way that objects can be shared among these environments. For example, you need to prepare a report on a group of sales department employees and include their photographs. The basis of the report will obviously be a text document. In addition, there is an employee database in which the data on the sales department employees is found. The search result (the selection) is placed into the text document, and the photographs are placed there as well. The result is a text document that, in addition to its own objects, contains a selection from the database and photographs. Such a document is called a compound (integrated) document.

Application environment interface. Applications running in the Windows environment have very similar GUIs, built from elements of the same types serving the same purposes. In the interface of each of them, four zones can be distinguished (Figure 2.1):

The title bar of the application environment, which contains the application's window interface controls and displays the name of the environment;

Management zone, where application and document management tools are located;

Working field where edited documents are placed;

Help zone, which contains information about the application's operating modes and hints to the user.

Figure 2.1 - Structural parts of the application interface

All programs created for Windows have a standard window interface, with help and control zones of the same type. The appearance of the working field changes depending on the purpose of the application environment.

When any application environment is started, the application window, that is, the environment itself, appears on the screen. Usually a document window opens immediately inside the application window. This can be a new document or the document that was edited last. If the application was invoked by launching a document, that document will appear in the application window.

The application environment interface includes the following elements: application environment title bar, main menu bar, toolbars, input and editing bar, status bar.

The title bar includes the system menu button, the application name (for example, Microsoft Excel), and the Minimize, Maximize/Restore, and Close buttons.

The main menu of the application environment, like that of any other Windows program, is organized like a nesting doll. The top-level sections are shown on the main menu bar. Within each section, lower-level commands are grouped by purpose, and the list of these commands opens as a drop-down menu. Some of these commands in turn bring up an additional submenu of a still lower level. Thus, with the help of the main menu, the required control command is selected step by step and all the parameters needed for its execution are set.

The toolbar (pictographic menu) contains a set of buttons (icons), chosen by the user, for invoking the control commands of the main menu faster than through the multilevel menu itself.

The interfaces of the spreadsheet processor and the database management system also include an input and editing bar. This bar displays the formulas or data entered into the current table cell or database field; in it you can view or edit the contents of the cell or field and see the formula itself.

The status bar contains information about the application's operating modes. Besides the elements already listed, there is a group of elements that can loosely be called the auxiliary control area. These include the title bar of the document window generated by the application, as well as the scroll bars.

The title bar of the document window shows the file name of the document being edited by the selected application. If the document window is maximized, its title bar merges with the title bar of the application.

Scroll bars are needed for viewing those areas of the document that are not visible at the moment (only part of the document, called the working field, is visible in the document window on the screen). The interface element that moves the text vertically is called the vertical scroll bar; horizontal movement uses the horizontal scroll bar. They work exactly as in any other Windows window.

Document editing. When working in an application environment, it often becomes necessary to change previously created documents. With application programs you can not only create documents, as was possible on a typewriter, but also modify them later: make corrections, eliminate errors, and search for and replace individual values. All operations connected with changing a document and correcting errors in it are united by a common concept - editing. Editing is the process of making changes to a document.

You can edit not only text documents but also tables, databases, and drawings. For example, if the work involves calculations, there is no need to recalculate huge tables: it is enough to change only the source figures, and the spreadsheet processor will recalculate the totals on its own.

When editing, you must:

1. Select an object.

2. Execute the editing command or action.

Selecting an object. Before any action can be performed on an object in a document, the object must be selected. As a rule, selected objects are shown on the screen in inverse color or with a visible outer border. Objects are usually selected with a mouse click. Often a group of similar, consecutive objects must be selected, for example a phrase in a sentence or several cells in a table. In this case the mouse is dragged, with the left button held down, from the first object to the last.

Document formatting. Any document must be attractively and professionally designed. For instance, a table has been created but is too wide to fit on the page: either the table must be reduced or the page widened. Each application environment has a set of operations that give the document the required external appearance. All operations for designing the document as a whole or its individual objects are united by a common concept - formatting.

Formatting is the process of giving the appearance of a document or its individual objects the required form.

In doing so, one should take into account the environment in which the object was created, since this determines the tools that are used.

Characteristics of the application environment tools. When working with a particular document on a computer, a person uses the application programs that form the application environment as tools. Each application environment has tools for working on a document. These tools can be used via the buttons on the toolbar or by executing commands from the various menus. Application environment tools are all the means by which the application environment acts on the objects of a document and on the document itself.

Tools differ, first of all, in their purpose. For example, some tools are designed for working with files, others for processing the data in the application. All the tools of the application environment are managed through the commands of the main menu. The names of these commands usually coincide with the names of the corresponding tools. Commands are combined by purpose into groups called menu items (for example, the File, Edit, Insert, and Service menu items). The menu items form the top level of the main menu (Figure 2.2).

Figure 2.2 - The main menu of the application environment

Such a menu is called multilevel, since it contains commands grouped by purpose. Each group is opened by clicking its name, after which you can go to the next menu level and select the required command from the group. In some cases a submenu of a still lower level opens. At the lowest level it is often necessary to refine the command's parameters by specifying the required values in the dialog box that opens. The figure shows only those menu names that are common to all applications.

File. This menu item combines commands for working with files and with the document as a whole. With it you can create a new file, open an existing one, save the edited file or a copy of it under a different name and/or in a different location, set page parameters, and print the edited file.

The document created by the application environment can be in various forms:

In screen form, that is, in the form of a document displayed on the monitor screen with objects embedded in it;

In the form of a hard copy, that is, in the form of a printout of the created document on a printer;

In electronic form - in the form of a file saved on disk.

The result of working with any application program must be saved in a file on disk. Without this, it is impossible to continue working or transfer the created document to another computer.

Edit. The command list of this item usually begins with the undo and redo commands. This section of the menu also contains commands for editing the contents of document objects. Although these objects differ from one application environment to another, the data exchange mechanism common to application environments makes it possible to apply the same kinds of operations to all objects. To copy and move various objects, as already mentioned, the clipboard or OLE technology is used. With their help, data from different application environments can be integrated.

It should be noted that this mechanism allows data exchange not only within a single document, but also between different application environments.

Insert. This section of the menu contains commands for inserting (embedding) into a document various objects created in any application environment.

Format. This menu item contains commands that format document objects created in this application. Usually the command names are the same as the names of the objects to be formatted: Cells..., Rows, Columns, Font..., Paragraph... etc.

In addition to commands for formatting specific objects, there are also commands that define styles and auto-format.

A style is a set of formatting options for a document object. AutoFormat assigns formatting options to all the objects of a document and to the document as a whole.

View. This menu item serves for choosing among different ways of displaying the document on the screen, configuring the display of the tools used, adding headers and footers, changing the scale at which the document is shown, and so on.

Service. This menu provides additional capabilities of the application environment. These capabilities are implemented by auxiliary programs of the application environment, such as the spell checker. Such a program can be used not only by the text editor but also by other Windows applications.

Another example is the address book. This program stores the data of people with whom you often have to deal. It can be used to forward your document to an address stored in your address book, or to insert a message into a document intended for a specific person.

Window. This menu should be accessed when working with several documents in different windows at the same time to configure them and move from one window to another.

Help. This menu item is used to get help on all the tools of the current application environment.

Activity planning and communication support

The Microsoft Outlook program is designed for organizing documents and planning tasks, including sending mail, scheduling appointments, events, and meetings, maintaining a list of contacts and a list of tasks, and keeping track of all the work done.

The Microsoft Outlook software environment has replaced the various kinds of notebooks and organizers that managers and secretaries used to organize their work: telephone books were used to store information about people and organizations, weekly planners to plan daily meetings and affairs, and notepads for temporary notes. In addition to these kinds of notebooks, work plans were drawn up for a week, a month, a year, and so on.

Information is organized in the form of folders, which are similar in purpose to their paper predecessors. The convenient ways of presenting and searching for information and the reminders offered by the Outlook environment help organize work efficiently. The Outlook environment can be used by the manager, the secretary, and other employees alike.

Figure 2.3 shows the main window of the Outlook software environment. On the left side of the window is the Outlook panel, which contains the main objects the environment works with. The objects are folders holding information of a certain type, grouped into the groups Outlook, Mail, and Other folders. The main folders the Outlook environment works with are Contacts, Calendar, Tasks, Notes, and Diary.

The Contacts folder is a repository of information and data about the people with whom an organization has business and personal relationships. These may be employees of the organization itself or employees of other firms. The Contacts folder may store an e-mail address, a postal address, several phone numbers, and other information related to the contact, such as a birthday or an anniversary. The Address Book used for sending e-mail is formed on the basis of the Contacts folder.

Figure 2.3 - Outlook program window

In the Outlook environment, all events are divided into several groups: appointments, meetings, events, tasks, phone calls (Figure 2.4).

Appointments are events for which time is reserved in the calendar; no one else is invited to them and no resources are engaged. Resources mean the allocation of a special room, time spent on preparation, and material costs.

A meeting is an appointment to which other people are invited or for which resources are engaged. An event is an all-day occurrence, to which other people may or may not be invited. In the Outlook environment, you can plan appointments, meetings, and events and schedule their time in the Calendar folder.

Figure 2.4 - Types of events

A task is a job that must be completed by a certain deadline and involves significant time expenditure.

The Tasks folder is used to describe information about tasks and to organize their completion.

Phone call - an event related to resolving issues by phone and not requiring direct contact.

Phone calls, as well as all the work of creating and processing various documents on the computer, are recorded in the Diary folder.

The presented folder system allows a business person to organize the planning of his working time and track the time spent on work.

Another folder group, which includes the Inbox, Outbox, Drafts, and Sent Items folders, is designed for organizing e-mail exchange with work partners.

The main information elements of these folders are messages. A message is a document sent or received by e-mail. The Inbox folder is for receiving messages; the Outbox and Drafts folders are for preparing messages to be sent; the Sent Items folder stores the messages that have been sent.

The main actions that you can perform on items in the Outlook environment are:

Create;

Set and change parameters;

Select, copy, paste a copy, delete;

Mark as completed;

Forward to another person;

Attach a document;

Link to a contact.

