Hardware and software setup


Interface types

An interface is, first of all, a set of rules. Like any rules, they can be generalized, collected into a "code", and grouped by a common feature. This leads to the concept of an "interface type": a grouping of similar ways in which humans and computers interact. Below is a brief schematic classification of the interfaces used for communication between a person and a computer.

Modern types of interfaces are:

1) Command interface. It is so called because in this type of interface a person gives "commands" to the computer, and the computer executes them and returns the result. The command interface is implemented as batch technology and command-line technology.

2) WIMP interface (Window, Image, Menu, Pointer). A characteristic feature of this type of interface is that the dialogue with the user is conducted not through typed commands but through graphic images: menus, windows, and other elements. Commands are still given to the machine, but indirectly, through those graphic images. This kind of interface is implemented at two levels of technology: a simple graphical interface and a "pure" WIMP interface.

3) SILK interface (Speech, Image, Language, Knowledge). This type of interface is closest to the ordinary human form of communication: within it, a normal "conversation" takes place between a person and the computer. The computer extracts commands for itself by analyzing human speech and finding key phrases in it, and it converts the result of command execution into a human-readable form. This type of interface makes the greatest demands on the computer's hardware resources, which is why it has been used mainly for military purposes.

Command interface

Batch technology. Historically, this type of technology appeared first. It already existed on the relay machines of Konrad Zuse (Germany, 1937). Its idea is simple: a sequence of characters is fed to the computer's input, specifying, according to certain rules, the sequence of programs to be launched. After one program finishes, the next is started, and so on; the machine finds its commands and data by the same rules. This input sequence could be, for example, a punched tape, a stack of punched cards, or a sequence of key presses on an electric typewriter (such as the CONSUL). The machine issued its own messages on a punch, an alphanumeric printer, or a typewriter. Such a machine is a "black box" (more precisely, a "white cabinet") into which information is constantly fed and which just as constantly "informs" the world about its state (see Figure 1). A person here has little influence on the machine's operation: he can only pause the machine, change the program, and start the computer again.

Later, when machines became more powerful and could serve several users at once, the eternal waiting of users ("I sent data to the machine. I'm waiting for it to answer. Will it answer at all?") became, to put it mildly, annoying. In addition, computing centers had become the second-largest "producer" of waste paper after newspapers. So with the advent of alphanumeric displays began the era of a truly user-oriented technology: the command line.

Fig. 2. A mainframe of the ES series of computers

Command line technology. With this technology, the keyboard is the only means of entering information from the person into the computer, and the computer outputs information to the person on an alphanumeric display (monitor). This combination (monitor + keyboard) became known as a terminal, or console. Commands are typed at the command line, which consists of a prompt symbol and a blinking cursor. When a key is pressed, a character appears at the cursor position and the cursor moves to the right. This is very similar to typing commands on a typewriter, except that the letters appear on the display rather than on paper, and a mistyped character can be erased. A command is terminated by pressing the Enter (or Return) key, after which input moves to the beginning of the next line, and it is from this position that the computer displays the results of its work on the monitor. Then the process repeats. Command line technology already worked on monochrome alphanumeric displays; since only letters, digits, and punctuation marks could be entered, the technical characteristics of the display did not matter much. A television receiver, or even an oscilloscope tube, could serve as the monitor.
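The prompt-read-execute-respond cycle described above can be sketched in a few lines. This is a minimal illustrative simulation, not any real shell; the command names "echo" and "add" are invented for the example.

```python
# A minimal sketch of command-line technology: the program shows a prompt,
# reads a typed command terminated by Enter, executes it, and prints the
# result starting on the next line. The command names are hypothetical.

def run_command(line):
    """Parse one typed line and return the machine's textual response."""
    parts = line.split()
    if not parts:
        return ""
    cmd, args = parts[0], parts[1:]
    if cmd == "echo":                    # repeat the arguments back
        return " ".join(args)
    if cmd == "add":                     # sum integer arguments
        return str(sum(int(a) for a in args))
    return "unknown command: " + cmd

def repl(lines):
    """Simulate a terminal session: '>' is the prompt symbol."""
    transcript = []
    for line in lines:
        transcript.append("> " + line)   # what the user typed at the prompt
        transcript.append(run_command(line))  # the machine's answer below it
    return transcript

print("\n".join(repl(["echo hello", "add 2 3"])))
```

A real console differs only in that the lines come from the keyboard rather than a list, but the dialogue structure (prompt, command, result, next prompt) is exactly the one described in the text.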

Both of these technologies are implemented in the form of a command interface: commands are given to the machine as input, and it, as it were, "responds" to them.

The predominant type of file when working with the command interface was the text file: that, and only that, could be created with a keyboard. The command-line interface saw its widest use with the advent of the UNIX operating system and the appearance of the first eight-bit personal computers running the multiplatform operating system CP/M.

GUI

How and when did the GUI appear? The idea originated in the mid-1970s, when the concept of a visual interface was developed at the Xerox Palo Alto Research Center (PARC). Its prerequisites were a reduction in the computer's reaction time to a command, an increase in the amount of random access memory, and the general development of the computer hardware base. The hardware basis of the concept was, of course, the appearance of alphanumeric displays, which already supported effects such as "flickering" characters, color inversion (white characters on a black background rendered as black on white), and underlining. These effects applied not to the whole screen but only to one or more characters. The next step was the color display, which allowed, alongside these effects, characters in 16 colors on a background with a palette (that is, a color set) of 8 colors. And after graphic displays appeared, capable of showing any image as a multitude of dots of various colors, there was no limit to imagination in using the screen. PARC's first GUI system, the 8010 Star Information System, thus appeared in 1981, four months before the first IBM PC was released. Initially, the visual interface was used only in application programs. Gradually it migrated into operating systems, used first on Atari and Apple Macintosh computers and then on IBM-compatible ones.

In parallel, and influenced by these same concepts, a process of unifying how application programs use the keyboard and mouse was underway. The merger of these two trends led to a user interface with which, with minimal time and money spent on retraining staff, one can work with any software product. The description of this interface, common to all applications and operating systems, is the subject of this part.

Simple GUI

At the first stage, the graphical interface looked very much like command line technology. Its differences from command line technology were as follows:

1. When displaying characters, some of them could be highlighted with color, an inverse image, underlining, or blinking, which increased the expressiveness of the image.

2. Depending on the specific implementation of the graphical interface, the cursor could be represented not only by a flickering rectangle but also by an area covering several characters or even part of the screen, distinguished from the unselected parts (usually by color).

3. Pressing the Enter key does not always execute the command and move to the next line. The response to pressing any key depends largely on which part of the screen the cursor was on.

4. In addition to the Enter key, the "gray" cursor keys are increasingly used on the keyboard.

5. Already in this version of the graphical interface, pointing devices (the mouse, trackball, etc.; see Fig. 3) came into use, making it possible to quickly select the desired part of the screen and move the cursor there.

Fig.3. Manipulators

Summing up, the distinctive features of this interface are the following.

1) Selection of areas of the screen.

2) Redefining keyboard keys depending on the context.

3) Using manipulators and gray keyboard keys to control the cursor.

4) Widespread use of color monitors.

The appearance of this type of interface coincided with the widespread use of the MS-DOS operating system, which brought it to the masses. Thanks to this, the 1980s were marked by the improvement of this type of interface, of character display characteristics, and of other monitor parameters.

Typical examples of this kind of interface are the Norton Commander file shell (see below for file shells) and the Multi-Edit text editor. The text editors Lexicon and ChiWriter and the word processor Microsoft Word for DOS are examples of how this interface surpassed itself.

WIMP interface

The "pure" WIMP interface became the second stage in the development of the graphical interface. This subspecies of the interface is characterized by the following features.

1. All work with programs, files and documents takes place in windows - certain parts of the screen outlined by a frame.

2. All programs, files, documents, devices, and other objects are represented as icons. When opened, an icon turns into a window.

3. All actions with objects are carried out using menus. Although the menu appeared at the first stage of the development of the graphical interface, it did not play a dominant role there, serving only as an addition to the command line. In a pure WIMP interface, the menu becomes the main control element.

4. Widespread use of pointing devices to indicate objects. The pointing device ceases to be a mere toy or keyboard accessory and becomes the main control element. With it, the user POINTS at any area of the screen, a window, or an icon, SELECTS it, and only then, through the menu or other techniques, manipulates it.

It should be noted that WIMP requires a high-resolution color raster display and a pointing device for its implementation. Programs oriented toward this type of interface also impose increased requirements on processor performance, memory capacity, bus bandwidth, and so on. However, this type of interface is the easiest to learn and the most intuitive, which is why the WIMP interface has become the de facto standard.

A striking example of a system with a graphical interface is the Microsoft Windows operating system.

Speech technology

Since the mid-1990s, after the advent of inexpensive sound cards and the spread of speech recognition technologies, the so-called "speech technology" of the SILK interface has appeared. With this technology, commands are given by voice, by pronouncing special reserved words (commands). The main such commands (according to the rules of the "Gorynych" system) are:

"Rest" - turn off the speech interface.

"Open" - switching to the mode of calling a particular program. The name of the program is called in the next word.

"I will dictate" - the transition from the mode of commands to the mode of typing by voice.

"Command mode" - return to voice commands.

And some others.

Words should be pronounced clearly and at the same pace, with a pause between words. Because speech recognition algorithms are still underdeveloped, such systems require individual pre-tuning for each specific user.

The "speech" technology is the simplest implementation of the SILK interface.


Whenever you turn on your computer, you deal with a user interface (UI) that seems simple and obvious, but a great deal of industry work has gone into making it so. Let's look back at the 1990s, when desktops became ubiquitous, and trace how UI technologies have evolved since then, how UI programming tools have developed, and what they are today. Table 1 lists the main tasks of UI development; on its basis, the various technologies for implementing user interfaces were analyzed and divided into categories, each of which includes technologies that solve one or more of these tasks in roughly the same way.

DBMS-bound input forms

One of the main categories of UI development tools is formed by tools oriented toward data-entry forms tied to a relational DBMS. The essence of this approach is to build the application UI out of forms that display the values of database fields in corresponding controls: text fields, lists, checkboxes, tables, and so on. The toolkit lets the developer lay out such a form and establish a direct connection between the controls and the data in the database. The developer does not need to worry about locks or about transferring, transforming, and updating data: when the user, for example, switches the record number in the form, its other fields are updated automatically; likewise, if the user changes the value in a field bound to a record, the change is instantly saved to the database. No special code is needed to achieve this; it is enough to declare the binding of the control, or of the whole form, to the data source. Support for data binding is thus one of the strengths of this category of tools. UI layout and styling in such environments are handled by form designers and specialized object-oriented APIs. The behavior of the UI is usually controlled by event handlers (methods implemented in the environment's main programming language), while expressions (including regular expressions) are used to validate input values. Typical representatives of this numerous category of tools are Microsoft Access and Oracle Forms.
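The write-through behavior described above can be sketched with Python's built-in sqlite3 module. This is a simplified model of the idea, not how Access or Oracle Forms are implemented; the table, column names, and class are invented for illustration.

```python
import sqlite3

# A sketch of DBMS-bound data binding: a "form" object whose fields read
# from and write through to a relational record, so the developer never
# writes explicit UPDATE statements in application code.

class BoundForm:
    """Binds form fields to one row of a table, identified by its id."""

    def __init__(self, conn, table, key):
        self.conn, self.table, self.key = conn, table, key

    def get(self, field):
        row = self.conn.execute(
            "SELECT %s FROM %s WHERE id = ?" % (field, self.table),
            (self.key,),
        ).fetchone()
        return row[0]

    def set(self, field, value):
        # The change is saved instantly, as with form-to-source binding.
        self.conn.execute(
            "UPDATE %s SET %s = ? WHERE id = ?" % (self.table, field),
            (value, self.key),
        )
        self.conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

form = BoundForm(conn, "customers", 1)
form.set("name", "Bob")      # editing the form field updates the database
print(form.get("name"))      # reading the field reflects the stored value
```

The point of the sketch is the division of labor: application code declares which record a form is bound to, and reads and writes go through the binding rather than through hand-written SQL at every call site.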

Template Processors

Technologies for building user interfaces from templates written in markup languages have been in wide use since the mid-1990s. The main advantages of templates are flexibility and breadth of possibilities for creating dynamic web user interfaces, especially for defining structure and layout. Initially these toolkits used templates in which the layout and structure of the UI were specified in a markup language, while data binding was done in small blocks of a high-level language (Java, C#, PHP, Python, etc.). The latter could be combined with markup; for example, by emitting markup tags inside a loop, Java code could create repetitive visual elements such as tables and lists. The need to switch syntax frequently within one web page made it hard for programmers to develop and maintain code, so about a decade ago a shift began from high-level languages toward specialized markup tag libraries and expression languages created for particular web technologies.

Markup tags came to be used to implement the typical functions of web applications, while expressions were used to access data and call functions stored in server objects. A typical representative of this group is JavaServer Pages (JSP), whose JSP Standard Tag Library supports tasks such as XML document manipulation, loops, conditionals, DBMS queries (data binding), and internationalization (data formatting). The JSP Expression Language (EL), which serves as the data binding tool, offers a convenient notation for working with application objects and properties.
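The basic mechanism, markup fixing the structure while expressions pull in server-side data at render time, can be imitated with Python's stdlib string.Template. This is only a rough analogue of JSP-style templating, not EL itself; the page fragment and model keys are invented.

```python
from string import Template

# Markup defines the static structure; ${...} placeholders are the
# "expression language" slots filled from server-side data on each request.
page = Template(
    "<h1>Hello, ${user}!</h1><p>You have ${count} messages.</p>"
)

def render(template, model):
    """Substitute model values into the template, producing final HTML."""
    return template.substitute(model)

html = render(page, {"user": "Alice", "count": 3})
print(html)
```

Real template engines add loops, conditionals, and escaping on top of this substitution step, but the separation of markup from the data model is the same.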

There is a whole line of JSP-like web development tools: templates are used for planning and setting the structure, an expression language is used for data binding, and UI behavior is specified with event handlers implemented in ECMAScript via the Document Object Model programming interface. Data formatting is performed with specialized tag libraries, and CSS (Cascading Style Sheets) is usually used to style the appearance. Popular representatives of this category of tools are ASP, PHP, Struts, WebWork, Struts2, Spring MVC, Spyce, and Ruby on Rails.

Object Oriented and Event Tools

A significant proportion of UI-building tools are based on an object-oriented model. Typically these toolkits offer a library of prebuilt UI elements; their main advantages are the ease of assembling reusable blocks from simple components and an intuitive, flexible way of programming behavior and interaction through event handlers. In these toolkits, all UI development tasks are solved through specialized object APIs. The category includes Visual Basic, MFC, AWT, Swing, SWT, Delphi, Google Web Toolkit, Cocoa Touch UIKit, Vaadin, and others. It also includes Nokia's Qt toolkit, which offers several original concepts. In most toolkits, all the complexity of interaction between elements of the UI structure is implemented with event handlers; Qt adds "signals" and "slots". A signal is emitted by a UI component whenever a certain event occurs, and a slot is a method called in response to a particular signal. A signal can be declaratively associated with any number of slots, and conversely one slot can receive any number of signals. The element emitting a signal "does not know" which slots will receive it, so user interface elements remain loosely coupled by signal-slot connections. This mechanism encourages encapsulation and makes it possible to specify UI behavior declaratively.
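The signal-slot idea can be shown in a few lines of Python. This is our own minimal sketch of the mechanism, not Qt's actual API; the class and method names are invented for the example.

```python
# A minimal sketch of the signal-slot mechanism: a signal keeps a list of
# connected slots and calls each of them when emitted. The emitting
# component does not know which slots receive the signal, so UI elements
# stay loosely coupled.

class Signal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

class Button:
    def __init__(self):
        self.clicked = Signal()   # the component declares its signal

    def press(self):              # simulate a user click
        self.clicked.emit()

log = []
button = Button()
# One signal, many slots: both handlers run on a single click, and the
# button knows nothing about either of them.
button.clicked.connect(lambda: log.append("slot A ran"))
button.clicked.connect(lambda: log.append("slot B ran"))
button.press()
print(log)   # -> ['slot A ran', 'slot B ran']
```

Note how the coupling runs only through the connect() calls: swapping a slot for another never requires touching the Button class, which is exactly the encapsulation benefit described above.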

Hybrids

Hybrid technologies are relatively new in the world of general-purpose UI development: along with templates and expression languages, such toolkits use an object API. A typical representative is JavaServer Faces: tag libraries describe the structure and layout and also format the data; an expression language binds elements and events to server objects and application code; and an object API displays elements, manages their state, handles events, and validates input. Other popular toolkits in this category include ASP.NET MVC, Apache Wicket, Apache Tapestry, Apache Click, and the ZK Framework.

Adobe Flex is conceptually close to technologies in this category, as it uses templates for structuring and layout, and programming is done entirely in ActionScript. Like Qt, the Flex framework provides a mechanism for solving problems related to behavior programming and data binding.

Declarative toolkits

Such tools are the newest trend in the field of UI development. They use XML- and JSON-based (JavaScript Object Notation) languages to specify the structure of the user interface, and declarative notation predominates for the other UI development tasks as well. Unlike hybrid approaches, which are aimed mainly at web interfaces, declarative approaches are also used to develop native applications for mobile and desktop platforms.

The Android user-interface API is event-driven and object-oriented, but alongside the main API the OS has an auxiliary XML-based one that allows the structure and layout of the user interface to be declared, its elements styled, and their properties managed. A declarative description of the interface shows its structure more clearly and helps in debugging; it allows the layout to be changed without recompilation and makes it easier to adapt to different platforms, screen sizes, and aspect ratios. For more dynamic user interfaces, the structure of elements can also be specified and changed programmatically through the object API, but data binding is not supported. There is, however, Android-Binding, a third-party open-source solution that allows UI elements to be bound to data models.
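The declarative approach boils down to this: structure lives in markup, and a small runtime walks the markup to build the widget tree. The sketch below uses Python's stdlib XML parser; the tag and attribute vocabulary is invented for illustration and does not follow Android's actual layout schema.

```python
import xml.etree.ElementTree as ET

# The UI structure is declared as markup, separate from program code:
# changing this string changes the layout without touching build() below.
LAYOUT = """
<column>
    <label text="Name:"/>
    <input id="name"/>
    <button id="ok" text="OK"/>
</column>
"""

def build(node):
    """Recursively turn an XML element into a widget-description dict,
    the way a declarative UI runtime inflates markup into widgets."""
    return {
        "widget": node.tag,
        "attrs": dict(node.attrib),
        "children": [build(child) for child in node],
    }

tree = build(ET.fromstring(LAYOUT))
print(tree["widget"], [c["widget"] for c in tree["children"]])
```

Because the layout is plain data, the same markup can be re-inflated for a different screen size or platform, which is the adaptability advantage the text mentions.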

UIs for Windows programs and rich Internet applications based, respectively, on Windows Presentation Foundation and Microsoft Silverlight can be created with another XML vocabulary, the eXtensible Application Markup Language (XAML). It allows the structure, layout, and style of the UI to be specified and, unlike the Android markup language, supports data binding and event handling.

Nokia recommends that developers use Qt Quick, a cross-platform toolkit for desktop, mobile, and embedded operating systems that supports QML, a declarative scripting language based on JSON syntax. The user interface is described as a hierarchical structure, and behavior is programmed in ECMAScript. As in ordinary Qt, the signal-slot mechanism is supported. Qt Quick can bind properties of UI elements to a data model and supports the concept of a state machine, which allows interface behavior to be modeled graphically.

Another example is Enyo, a cross-platform ECMAScript UI toolkit in which the interface structure is declared and behavior is controlled by event handlers. Events are processed in three ways: at the level of individual UI components; by passing from child to parent without direct binding; and by broadcasting messages and subscribing to them (also without direct binding). Loose coupling of UI elements improves reusability and the encapsulation of large interface fragments. In essence, Enyo's main strength is its encapsulation model, which allows the UI to be composed of reusable, self-contained building blocks with defined interfaces; the model promotes abstraction and covers all architectural levels of the UI. The Enyo project is working on adding support for data binding.
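The third event-delivery style mentioned above, broadcast and subscribe without direct binding, is a plain publish-subscribe bus. The sketch below is our own illustration of the pattern in Python, not Enyo code; the channel name and handlers are invented.

```python
# A sketch of broadcast-style event handling: components publish messages
# to named channels and subscribe to them without holding references to
# one another, which keeps UI fragments loosely coupled.

class EventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, channel, handler):
        self._subscribers.setdefault(channel, []).append(handler)

    def publish(self, channel, payload):
        for handler in self._subscribers.get(channel, []):
            handler(payload)

bus = EventBus()
received = []
# Two unrelated "UI fragments" react to the same broadcast; neither knows
# who published it, and the publisher knows neither of them.
bus.subscribe("login", lambda user: received.append("toolbar saw " + user))
bus.subscribe("login", lambda user: received.append("sidebar saw " + user))
bus.publish("login", "alice")
print(received)
```

The same decoupling argument applies here as with signals and slots: fragments can be added, removed, or reused without editing each other's code, only their subscriptions.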

The Eclipse XML Window Toolkit (XWT) is another toolkit focused on declarative UI description. The original goal of its creation was to bring together in Eclipse all the UI development tools, including SWT, JFace, Eclipse Forms, and others; all of their elements have correspondences of one kind or another in XWT. The structure and layout of the UI in XWT are specified in an XML-based language, and an expression language is used for data binding (access to the application's Java objects). Event handling is programmed in Java, and CSS is used to style interface elements. The XWT execution engine is implemented as a Java applet and an ActiveX control, so it can run in almost any browser.

There are many similar tools in this category. AmpleSDK, for example, uses XUL as the UI description language, ECMAScript functions for programming dynamic behavior, and CSS for styling. The Dojo Toolkit defines an interface declaratively and provides a wide range of predefined elements, object stores for data access, and an ECMAScript-based event mechanism with publish-subscribe support. The toolkit also offers internationalization, an advanced data-query API, modularization, and multiple class inheritance.

Model-Based Toolkits

A significant share of UI development technologies are based on models and domain-specific languages. Mostly these are interface models, but domain models can be used as well; in both cases the model either serves to generate the user interface in advance or is interpreted at runtime. This class of technologies raises the level of abstraction, provides more systematic methods for designing and implementing user interfaces, and offers a framework for automating related tasks. However, according to some researchers, model-based technologies do not provide a universal way to integrate the user interface with the application, and there is still no agreement on which set of models is best suited for describing a UI. The task of data binding has not been solved, and the models have not been combined to address the other UI development tasks.

Analyzing the generations of model-based approaches to UI development since the 1990s, one can conclude that today there is a generally accepted understanding of the levels of abstraction and the types of models suitable for developing modern user interfaces, but there is still no consensus (no standards) regarding the information (semantics) the various models should contain. Task, dialogue, and presentation models can be considered basic: the presentation model covers structuring, layout, and styling; the task model is responsible for data binding, specifying for each task the UI and logic objects it works with; and the dialogue model covers behavioral aspects. An example of a task model is ConcurTaskTrees (CTT), which can be used together with the MARIA language, which implements the remaining UI models; CTT combined with MARIA constitutes a complete model-based toolkit. A fairly large family of UI modeling tools also relies on the UML language, entity-relationship models, and the like; UML profiles are widely used for building the user interfaces of business applications. Other actively used toolkits include WebRatio, UMLi, Intellium Virtual Enterprise, and SOLoist.

Generic user interfaces

A small but significant subset of UI technologies generate the UI from user, data, task, or other kinds of application models. The interface is generated from the model either fully automatically or semi-automatically; models can also be interpreted at runtime without serving as the basis for generation. In any case, thanks to the high degree of automation, technologies of this category save developer time and reduce the number of errors, and the generated interfaces have a uniform structure. On the other hand, generated UIs are inflexible, offer limited functionality, and their generation process can be unpredictable. Nevertheless, with a direct connection to the domain model, developing applications with generated UIs is quite feasible. There are about a dozen examples in this category, led by the widely used Naked Objects architectural pattern. Automatic UI generation can be applied successfully in particular subject areas, for example in designing dialog boxes and user interfaces for remote control systems. Researchers see the further development of this class of technologies in improved modeling techniques and new ways of combining models to improve the usability of generated UIs.
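The core move of model-driven generation, deriving the form directly from the domain model, can be sketched with Python dataclasses. This is a toy illustration in the spirit of the Naked Objects pattern, not any real framework; the type-to-widget mapping and all names are our own assumptions.

```python
from dataclasses import dataclass, fields

# A hypothetical mapping from field types to widget kinds; a real generator
# would also consult validation rules, labels, and layout hints.
WIDGET_FOR_TYPE = {str: "text-field", int: "spinner", bool: "checkbox"}

@dataclass
class Customer:          # the domain model; the UI is derived from it
    name: str
    age: int
    subscribed: bool

def generate_form(model_cls):
    """Introspect the model's fields and emit a uniform widget list,
    the way generators produce forms straight from the domain model."""
    return [
        {"label": f.name,
         "widget": WIDGET_FOR_TYPE.get(f.type, "text-field")}
        for f in fields(model_cls)
    ]

for row in generate_form(Customer):
    print(row["label"], "->", row["widget"])
```

Both the strengths and the weaknesses discussed above are visible even here: every model yields a consistent form with no UI code at all, but the result is rigid, since the only way to change the generated form is to change the model or the generator.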

Trends and Challenges

The figure shows the chronology of the appearance of various UI development tools, their distribution by category, and their main areas of application, while Table 2 shows how each technology solves the various UI development tasks.

The evolution of commonly used web development technologies is characterized by two opposing trends. After template-based technologies came toolkits with object-oriented APIs, which most often supplemented templates (in the hybrid approaches) or replaced them entirely (GWT, Vaadin). This is quite logical, given the general superiority of object-oriented languages over template languages (inheritance, polymorphism, encapsulation, parameterization, reusability, and so on), the need for advanced concepts and mechanisms for composing extensive UI structures, and the "success story" of object-oriented APIs in the desktop era.

It is noteworthy that, compared with imperative and object-oriented methods of building UIs, declarative ones are now more widely used: HTML, XML, XPath, CSS, JSON, and similar notations have become commonplace. Much of a UI's structure is typically static, so declarative notations suit structuring, layout, and data binding well. The behavioral aspects of the UI, however, are still implemented mostly in the classic event-driven paradigm, although there are exceptions where declarative means are used.

A noticeable trend in UI development is a focus on standard technologies and platforms. XML and ECMAScript are now more popular than ever, although specialized technologies, especially model-based ones, actively compete with established standards for a place in the sun.

Several challenges await the vendors of development tools, among them layered architectures and the means to define them. User interfaces for large-scale business applications often run to hundreds of pages or more, in which case a clear overview of the system architecture is absolutely essential. One new modeling technique addresses this problem by introducing the concept of a capsule, which provides strong encapsulation of UI fragments and allows the architecture to be specified at different levels of detail: a capsule has an internal structure and can be applied recursively at all lower levels of UI components. The Enyo and WebML developers are trying to solve a similar problem.

Flexibility, extensibility and a breadth of supporting tools are the real benefits of commonly used UI development technologies, but so far they suffer from a rather low level of abstraction and a lack of expressiveness. Model-based approaches, on the other hand, should avoid inheriting semantics from low-level UI models; otherwise abstract models of user interfaces can become as complex as the UI implementation itself. Instead of drawing on knowledge of the UI subject area and the semantics of the application model, UI designers are still required to work directly with low-level components: dialog boxes, menus, and event handlers.

UI development technologies face another serious problem: the requirement to adapt to many target platforms, which is typical of all modern interactive applications. Fortunately, the model-oriented community reacted in time: in 2003, a unifying universal architecture was proposed for the processes, models, and methods used in building multiplatform UIs.

The current variety of computing devices and platforms is somewhat reminiscent of the desktop era of the late 1990s, with its abundance of user-interface toolkits offered by various vendors. To date, HTML5 has not solved the problem of technological discord, owing to its limited support for hardware features and programming interfaces. Ultimately, as with many software engineering problems, UI development today calls for clear and simple solutions, which nevertheless demand an enormous amount of implementation effort from their creators.

Literature

  1. P.P. Da Silva. User Interface Declarative Models and Development Environments: A Survey. Proc. Interactive Systems: Design, Specification, and Verification, Springer, 2000, pp. 207-226.
  2. G. Meixner, F. Paterno, J. Vanderdonckt. Past, Present, and Future of Model-Based User Interface Development. i-com, 2011, vol. 10, no. 3, pp. 2-11.
  3. G. Mori, F. Paterno, C. Santoro. CTTE: Support for Developing and Analyzing Task Models for Interactive Systems Design. IEEE Trans. Software Eng., 2002, vol. 28, no. 8, pp. 797-813.

Zarko Mijailovic ([email protected]) is a senior engineer and Dragan Milicev ([email protected]) is an associate professor at the University of Belgrade.

Zarko Mijailovic, Dragan Milicev, A Retrospective on User Interface Development Technology, IEEE Software, November/December 2013, IEEE Computer Society. All rights reserved. Reprinted with permission.

1. THE CONCEPT OF THE USER INTERFACE

An interface is a set of technical, software, and methodological means (protocols, rules, conventions) for the interaction, within a computing system, of users with devices and programs, and of devices with other devices and programs.

An interface, in the broad sense of the word, is a way (a standard) of interaction between objects. In the technical sense, an interface defines the parameters, procedures, and characteristics of that interaction. The following kinds are distinguished:

User interface - a set of methods of interaction between a computer program and the user of this program.

Programming interface - a set of methods for interaction between programs.

Physical interface - a way for physical devices to interact. Most often this refers to computer ports.

The user interface is a combination of software and hardware that supports the user's interaction with a computer. Dialogues form the basis of such interaction: a dialogue here is a regulated exchange of information between a person and a computer, carried out in real time and aimed at jointly solving a specific problem. Each dialogue consists of separate input/output processes that physically provide communication between the user and the computer. The exchange of information is carried out by transmitting messages.

Figure 1. User interaction with the computer

The user mainly generates messages of the following types:

an information request;

a help request;

a request for an operation or function;

entering or changing information.

In response, the user receives hints or help; informational messages requiring a response; orders requiring action; error messages and other information.

The user interface of the computer application includes:

means of displaying information, displayed information, formats and codes;

command modes, language "user - interface";

dialogues, interaction and transactions between the user and the computer, user feedback;

decision support in a specific subject area;

how to use the program and documentation for it.

The user interface (UI) is often understood as merely the appearance of a program. In reality, however, the user perceives the entire program through it, so such an understanding is too narrow. The UI in fact combines all the elements and components of a program that can influence the user's interaction with the software.

It is not just the screen the user sees. These elements include:

a set of user tasks that he solves with the help of the system;

the metaphor used by the system (for example, the desktop in MS Windows®);

system controls;

navigation between system blocks;

visual (and not only) design of program screens;

means of displaying information, displayed information and formats;

data entry devices and technologies;

dialogues, interactions and transactions between the user and the computer;

user feedback;

decision support in a specific subject area;

how to use the program and documentation for it.

2. TYPES OF INTERFACES

An interface is, first of all, a set of rules. Like any rules, they can be generalized, collected into a "code", and grouped by a common feature. This brings us to the concept of an "interface type" as a grouping of similar ways of interaction between humans and computers. Briefly, the following schematic classification of interfaces for communication between a person and a computer can be proposed.

Modern types of interfaces are:

1) Command interface. It is so called because in this type of interface a person gives "commands" to the computer, and the computer executes them and returns the result to the person. The command interface is implemented as batch technology and command-line technology.

2) WIMP interface (Window, Icon, Menu, Pointer). A characteristic feature of this type of interface is that the dialogue with the user is conducted not with commands but with graphic images: menus, windows, and other elements. Although commands are still given to the machine in this interface, this is done indirectly, through graphic images. This kind of interface is implemented at two levels of technology: a simple graphical interface and a "pure" WIMP interface.

3) SILK interface (Speech, Image, Language, Knowledge). This type of interface is closest to the usual human form of communication: within it, a normal "conversation" takes place between a person and a computer. The computer extracts commands for itself by analyzing human speech and finding key phrases in it, and it converts the results of command execution into a human-readable form. This type of interface is the most demanding on a computer's hardware resources, and therefore it has been used mainly for military purposes.

2.1 Command interface

Batch technology. Historically, this type of technology appeared first; it already existed on the relay machines of Sues and Zuse (Germany, 1937). Its idea is simple: a sequence of characters is supplied to the computer's input, and in this sequence, according to certain rules, the programs to be launched for execution are indicated. As soon as one program finishes, the next one is launched, and so on; the machine finds the commands and data for itself according to fixed rules. This sequence can be, for example, a punched tape, a stack of punched cards, or a sequence of keystrokes on an electric typewriter (of the CONSUL type). The machine issues its own messages on a punch, an alphanumeric printer, or a typewriter tape.

Such a machine is a "black box" (more precisely, a "white cabinet") into which information is constantly fed and which just as constantly "informs" the world about its state (see Figure 1). A person here has little influence on the operation of the machine: he can only stop it, change the program, and start the computer again. Later, when machines became powerful enough to serve several users at once, the perpetual waiting of users - "I sent data to the machine. I'm waiting for it to answer. Will it answer at all?" - became, to put it mildly, annoying. In addition, computer centers became the second largest "producer" of waste paper, after newspapers. Therefore, with the advent of alphanumeric displays, the era of a truly user-friendly technology, the command line, began.

Fig. 2. A mainframe computer of the ES series

Command-line technology. With this technology, the keyboard serves as the only means of entering information from the person to the computer, while the computer outputs information to the person on an alphanumeric display (monitor). This combination (monitor + keyboard) became known as a terminal, or console. Commands are typed on the command line, which consists of a prompt symbol and a blinking rectangle, the cursor. When a key is pressed, characters appear at the cursor position and the cursor moves to the right. This is very similar to typing commands on a typewriter, except that the letters are displayed on the screen rather than on paper, and a mistyped character can be erased. A command is completed by pressing the Enter (or Return) key, after which input moves to the beginning of the next line; it is from this position that the computer displays the results of its work on the monitor, and then the process repeats. Command-line technology already worked on monochrome alphanumeric displays. Since only letters, digits, and punctuation marks could be entered, the technical characteristics of the display were not critical: a television receiver, or even an oscilloscope tube, could serve as the monitor.

Both of these technologies take the form of a command interface: commands are fed to the machine as input, and it, as it were, "responds" to them.
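As a rough illustration of the two variants of the command interface, here is a small Python sketch; the command table and the command names are invented for the example.

```python
def run_batch(script, commands):
    """Batch technology: a whole sequence of commands is submitted at once,
    and the machine works through it without further human intervention."""
    return [commands[name]() for name in script]

def run_command_line(line, commands):
    """Command-line technology: one typed command, one immediate response."""
    name, *args = line.split()
    return commands[name](*args)

# A toy command set (illustrative only).
commands = {
    "date": lambda: "1979-01-01",
    "echo": lambda *words: " ".join(words),
}
```

The difference is purely one of interaction: the batch variant has no dialogue at all, while the command-line variant answers each command before the next is typed.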

Text files became the predominant type of file when working with the command interface: they, and only they, could be created using the keyboard. The period of the command-line interface's widest use coincided with the emergence of the UNIX operating system and the appearance of the first eight-bit personal computers with the multiplatform operating system CP/M.

2.2 GUI

How and when did the GUI appear? The idea originated in the mid-1970s, when the concept of a visual interface was developed at the Xerox Palo Alto Research Center (PARC). Its prerequisites were a reduction in the computer's reaction time to a command, an increase in the amount of RAM, and the development of the hardware base of computers. The hardware basis of the concept was the appearance of alphanumeric displays, which already supported effects such as "flickering" characters, color inversion (swapping white characters on a black background for black characters on a white background), and underlined characters. These effects extended not to the entire screen but only to one or more characters.

The next step was the creation of a color display that allowed, alongside these effects, characters in 16 colors on a background with a palette (that is, a color set) of 8 colors. And after the advent of graphic displays, able to show arbitrary images as a multitude of dots of various colors on the screen, there were no limits to the imagination in using the screen. PARC's first GUI system, the 8010 Star Information System, thus appeared four months before the first IBM computer was released in 1981. Initially the visual interface was used only in application programs; gradually it began to migrate into operating systems, used first on Atari and Apple Macintosh computers, and then on IBM-compatible ones.

Around the same time, and influenced by these concepts, a process of unification took place in how application programs use the keyboard and mouse. The merger of these two trends led to the creation of a user interface with which, with minimal time and money spent on retraining staff, one can work with any software product. The description of this interface, common to all applications and operating systems, is the subject of this part.

2.2.1 Simple GUI

At the first stage, the graphical interface was very similar to command-line technology. It differed from command-line technology in the following ways:

1. When displaying symbols, part of them could be highlighted with color, inverse video, underlining, or blinking, which increased the expressiveness of the image.

2. Depending on the specific implementation of the graphical interface, the cursor could be represented not only by a flickering rectangle but also by an area covering several characters or even part of the screen, distinguished from the unselected parts (usually by color).

3. Pressing the Enter key did not always execute a command and move to the next line; the response to pressing any key depended largely on which part of the screen the cursor was in.

4. In addition to the Enter key, the "gray" cursor keys came into increasing use on the keyboard.

5. Manipulators (such as the mouse and trackball; see Fig. 3) already began to be used in this version of the graphical interface. They made it possible to quickly select the desired part of the screen and move the cursor.

Fig.3. Manipulators

Summing up, the following distinctive features of this interface can be cited.

1) Selection of areas of the screen.

2) Redefining keyboard keys depending on the context.

3) Using manipulators and gray keyboard keys to control the cursor.

4) Widespread use of color monitors.

The appearance of this type of interface coincides with the widespread use of the MS-DOS operating system. It was MS-DOS that introduced this interface to the masses, and thanks to it the 1980s were marked by the improvement of this type of interface, of character display characteristics, and of other monitor parameters.

Typical examples of this kind of interface are the Norton Commander file shell (see below for file shells) and the Multi-Edit text editor. The text editors Lexicon and ChiWriter, and the Microsoft Word for DOS word processor, are examples of how this interface outgrew itself.

2.2.2 WIMP interface

The "pure" WIMP interface became the second stage in the development of the graphical interface. This subspecies of the interface is characterized by the following features.

1. All work with programs, files and documents takes place in windows - certain parts of the screen outlined by a frame.

2. All programs, files, documents, devices, and other objects are represented as icons (pictograms). When opened, an icon turns into a window.

3. All actions on objects are carried out through the menu. Although the menu appeared during the first stage of the graphical interface's development, it did not play a dominant role there, serving only as an addition to the command line. In a pure WIMP interface the menu becomes the main control element.

4. Widespread use of manipulators for pointing at objects. The manipulator ceases to be a mere toy, an addition to the keyboard, and becomes the main control element: with it the user points at any area of the screen, a window or an icon, selects it, and only then controls it through the menu or other technologies.

It should be noted that WIMP requires a high-resolution color raster display and a manipulator for its implementation. Programs oriented toward this type of interface also place increased demands on computer performance, memory size, bus bandwidth, and so on. However, this type of interface is the easiest to learn and the most intuitive, so the WIMP interface has now become the de facto standard.
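The core WIMP elements (windows, icons, menus) can be modeled schematically in Python; the classes below are a conceptual illustration only, not how any real windowing system is implemented.

```python
class Window:
    """A framed part of the screen in which all work takes place."""
    def __init__(self, title):
        self.title = title
        self.is_open = True

class Icon:
    """In WIMP an icon stands for an object; opening it yields a window."""
    def __init__(self, label):
        self.label = label
    def open(self):
        return Window(self.label)

class Menu:
    """The main control element: actions are chosen from a list, not typed."""
    def __init__(self, items):
        self._items = dict(items)
    def choose(self, label):
        return self._items[label]()

# Point at an icon, then act on it through the menu (names are invented).
icon = Icon("Report.txt")
menu = Menu({"Open": icon.open})
```

The flow mirrors the description above: the user selects an object (the icon) and then invokes an action on it through the menu, which produces a window.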

A striking example of programs with a graphical interface is the Microsoft Windows operating system.

2.3 Speech technology

Since the mid-1990s, after the appearance of inexpensive sound cards and the spread of speech recognition technologies, the so-called "speech technology" of the SILK interface has emerged. With this technology, commands are given by voice by pronouncing special reserved words (commands). The main such commands (according to the rules of the Gorynych system) are:

"Rest" - turn off the speech interface.

"Open" - switching to the mode of calling a particular program. The name of the program is called in the next word.

"I will dictate" - the transition from the mode of commands to the mode of typing by voice.

"Command mode" - return to voice commands.

and some others.

Words should be pronounced clearly, at the same pace, with a pause between words. Owing to the immaturity of speech recognition algorithms, such systems require individual preliminary configuration for each specific user.
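A keyword-spotting command mode of this kind can be caricatured in Python; the recognizer below works on text rather than audio, and the mode names and transitions are illustrative, not those of the actual Gorynych product.

```python
def make_recognizer():
    """A toy two-mode recognizer: reserved words switch between
    command mode and dictation mode (all names are invented)."""
    state = {"mode": "command", "typed": []}

    def hear(utterance):
        words = utterance.lower().split()
        if state["mode"] == "command":
            if words[0] == "open":
                return f"launching {words[1]}"
            if words[0] == "dictate":
                state["mode"] = "dictation"
                return "dictation mode"
            return "unknown command"
        # Dictation mode: everything is text until the return phrase.
        if utterance.lower() == "command mode":
            state["mode"] = "command"
            return "command mode"
        state["typed"].append(utterance)
        return utterance

    return hear, state
```

Even this caricature shows why per-user tuning matters: everything hinges on reliably spotting a handful of reserved words in the input stream.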

The "speech" technology is the simplest implementation of the SILK interface.

2.4 Biometric technology

This technology originated in the late 1990s and was still being developed at the time of writing. Facial expression, gaze direction, pupil size, and other signs are used to control the computer. To identify the user, the pattern of the iris, fingerprints, and other unique information are used. Images are read from a digital video camera, and commands are then extracted from the image using special pattern recognition programs. This technology is likely to find its place in software products and applications where it is important to identify a computer user precisely.

2.5 Semantic (public) interface

This type of interface arose at the end of the 1970s, with the development of artificial intelligence. It can hardly be called an independent type of interface: it includes command-line, graphical, speech, and mimic interfaces. Its main distinguishing feature is the absence of commands when communicating with a computer: the request is formed in natural language, as associated text and images. In essence it is difficult to call this an interface at all; it is already a simulation of "communication" between a person and a computer. Since the mid-1990s there have been no publications on the semantic interface. It appears that, because of the significant military importance of these developments (for example, for the autonomous conduct of modern combat by robotic machines, or for "semantic" cryptography), the work was classified. Information that these studies are ongoing occasionally appears in periodicals (usually in computer news sections).

2.6 Interface types

There are two types of user interfaces:

1) procedurally oriented:

Primitive

With free navigation

2) object-oriented:

direct manipulation.

A procedure-oriented interface uses the traditional model of user interaction based on the concepts of "procedure" and "operation". Within this model, the software offers the user the ability to perform certain actions, for which the user supplies the appropriate data, and the consequence of which is the desired result.

Object-oriented interfaces use a model of user interaction focused on manipulating domain objects. Within this model, the user is given the opportunity to interact directly with each object and to initiate operations in the course of which several objects interact. The user's task is formulated as a purposeful change of some object, where an object is understood in the broad sense: a model of a database, a system, and so on. An object-oriented interface assumes that interaction is carried out by selecting and moving icons of the corresponding subject area. Single-document (SDI) and multiple-document (MDI) interfaces are distinguished.
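The contrast between the two models can be shown with a small Python sketch: in the procedure-oriented style the operation comes first and the data second, while in the object-oriented style the object is selected first and the operation is invoked on it. The function and class names are invented for the example.

```python
# Procedure-oriented: the user first chooses an operation ("print"),
# then supplies the data it should act on.
def print_document(path):
    return f"printing {path}"

result_procedural = print_document("report.txt")

# Object-oriented: the user first selects an object (the document),
# then initiates one of the actions that object offers.
class Document:
    def __init__(self, path):
        self.path = path
    def print(self):
        return f"printing {self.path}"

result_oo = Document("report.txt").print()
```

Both routes end in the same result; what differs is whether the interaction is organized around operations or around the domain objects themselves.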

Procedurally oriented interfaces:

1) Provide the user with the functions necessary to complete tasks;

2) The emphasis is on tasks;

3) Icons represent applications, windows or operations;

Object Oriented Interfaces:

1) Provide the user with the ability to interact with objects;

2) Emphasis is placed on inputs and results;

3) Pictograms represent objects;

4) Folders and directories are visual containers of objects.

A primitive interface is one that organizes interaction with the user in console mode. The only deviation from the sequential process dictated by the data is the organization of a loop for processing several data sets.

Menu interface. Unlike the primitive interface, it allows the user to select an operation from a special list displayed by the program. Such interfaces assume many work scenarios whose sequence of actions is determined by the users. The tree-like organization of a menu implies strictly limited navigation. There are two options for organizing the menu:

each menu window occupies the entire screen;

several multi-level menus are present on the screen at the same time (as in Windows).

With limited navigation, regardless of the implementation, finding an item in a menu of more than two levels turns out to be quite a challenge.
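The navigation problem is easy to see if the menu is modeled as a plain tree in Python: every additional level means one more mandatory choice on the way to a leaf command. The menu contents below are invented for the example.

```python
# A three-level menu tree (illustrative contents).
menu_tree = {
    "File": {
        "Open": "open-dialog",
        "Export": {"PDF": "export-pdf", "HTML": "export-html"},
    },
    "Edit": {"Undo": "undo"},
}

def navigate(tree, path):
    """Walk a strictly tree-shaped menu: the user must make one
    choice per level, in order, to reach a leaf command."""
    node = tree
    for choice in path:
        node = node[choice]
    return node
```

Reaching "export-pdf" takes three choices in a fixed order, which is exactly the rigidity that makes deep menus hard to use.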

Free navigation interface (GUI). It supports the concept of interactive work with the software, visual feedback to the user, and the ability to directly manipulate objects (buttons, indicators, status bars). Unlike the Menu interface, a free-navigation interface makes it possible to perform any operation valid in the current state, accessible through various interface components ("hot" keys and others). A freely navigable interface is implemented using event-driven programming, which presupposes the use of visual development tools (via messages).
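The event-driven handling behind a free-navigation interface can be sketched without any GUI library: handlers are registered for named events, and any of them may fire in any order, rather than in a menu-dictated sequence. The event names below are illustrative.

```python
class EventLoop:
    """A minimal event dispatcher: any registered action may fire
    at any time, which is the essence of free navigation."""
    def __init__(self):
        self._handlers = {}

    def bind(self, event, fn):
        """Associate an event name with a handler function."""
        self._handlers[event] = fn

    def dispatch(self, event, *args):
        """Deliver an event message; unbound events are simply ignored."""
        fn = self._handlers.get(event)
        return fn(*args) if fn else None

# Wire up a couple of interface components (names are invented).
ui = EventLoop()
ui.bind("button:save", lambda doc: f"saved {doc}")
ui.bind("key:ctrl+z", lambda doc: f"undo in {doc}")
```

The program no longer drives the user through a fixed scenario; it waits for messages and reacts to whichever valid operation the user chooses next.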



Like any technical device, a computer exchanges information with a person through a set of certain rules that are mandatory for both the machine and the person. These rules are called interfaces in computer literature. The interface should be clear and incomprehensible, friendly and not. Many adjectives go with it. But in one he is constant: he is, and you can’t get away from him anywhere.

Interface- these are the rules for the interaction of the operating system with users, as well as neighboring levels in the computer network. The technology of communication between a person and a computer depends on the interface.

Interface is, first of all, a set of rules. Like any rules, they can be generalized, collected into a "code", grouped according to a common feature. Τᴀᴋᴎᴍ ᴏϬᴩᴀᴈᴏᴍ, we have come to the concept of "interface type" as a combination of similar ways of interaction between humans and computers. We can propose the following schematic classification of various interfaces for communication between a person and a computer (Fig. 1.).

Packet technology. Historically this species technology came first. It already existed on the relay machines of Sues and Zuse (Germany, 1937). Its idea is simple: a sequence of characters is supplied to the computer input, in which, according to certain rules, the sequence of programs launched for execution is indicated. After the execution of the next program, the next one is launched, and so on. The machine, according to certain rules, finds commands and data for itself. This sequence can be, for example, a punched tape, a stack of punched cards, a sequence of pressing the keys of an electric typewriter (such as CONSUL). The machine also issues its messages on a perforator, an alphanumeric printer (ATsPU), a typewriter tape.

Such a machine is a "black box" (more precisely, a "white cabinet"), into which information is constantly fed and which also constantly "informs" the world about its state. A person here has little influence on the operation of the machine - he can only suspend the operation of the machine, change the program and start the computer again. Subsequently, when the machines became more powerful and could serve several users at once, the eternal expectation of users like: "I sent data to the machine. I'm waiting for it to answer. And will it answer at all?" - it became, to put it mildly, necessary to eat. In addition, computer centers, following newspapers, have become the second largest "producer" of waste paper. For this reason, with the advent of alphanumeric displays, the era of a truly user-friendly technology, the command line, began.

command interface.

The command interface is usually called so because in this type of interface a person gives "commands" to the computer, and the computer executes them and gives the result to the person. The command interface is implemented as batch technology and command line technology.

With this technology, the keyboard serves as the only way to enter information from a person to a computer, and the computer outputs information to a person using an alphanumeric display (monitor). This combination (monitor + keyboard) became known as a terminal, or console.

Commands are typed on the command line. The command line is a prompt symbol and a blinking rectangle - the cursor.
Hosted on ref.rf
When a key is pressed, characters appear at the cursor position, and the cursor itself moves to the right. The command is ended by pressing the Enter (or Return.) key. After that, the transition to the beginning of the next line is performed. It is from this position that the computer displays the results of its work on the monitor. Then the process is repeated.

Command line technology already worked on monochrome alphanumeric displays. Since only letters, numbers and punctuation marks were allowed to be entered, the technical characteristics of the display were not significant. A television receiver and even an oscilloscope tube could be used as a monitor.

Both of these technologies are implemented in the form of a command interface - the machine is fed into the input of the command, and it, as it were, "responds" to them.

Text files became the predominant type of files when working with the command interface - they and only they could be created using the keyboard. The most widespread use of the command line interface is the emergence of the UNIX operating system and the appearance of the first eight-bit personal computers with the multiplatform operating system CP/M.

WIMP interface(Window - window, Image - image, Menu - menu, Pointer - pointer). A characteristic feature of this type of interface is that the dialogue with the user is conducted not with the help of commands, but with the help of graphic images - menus, windows, and other elements. Although machine commands are given in this interface, this is done "directly", through graphic images. The idea of ​​a graphical interface originated in the mid-70s, when the concept of a visual interface was developed at the Xerox Palo Alto Research Center (PARC). The prerequisite for the graphical interface was a decrease in the computer's response time to a command, an increase in the amount of RAM, as well as the development of the technical base of computers. The hardware basis of the concept, of course, was the appearance of alphanumeric displays on computers, and these displays already had such effects as "flickering" of characters, color inversion (reversing the style of white characters on a black background, that is, black characters on a white background ), underlining characters. These effects did not extend to the entire screen, but only to one or more characters. The next step was the creation of a color display that allows, along with these effects, symbols in 16 colors on a background with a palette (that is, a color set) of 8 colors. After the advent of graphic displays, with the ability to display any graphic images in the form of many dots on a screen of various colors, there are no limits to the imagination in using the screen! PARC's first GUI system, the 8010 Star Information System, thus appeared four months before the first IBM computer was released in 1981. Initially, the visual interface was used only in programs. Gradually, he began to move to the operating systems used first on Atari and Apple Macintosh computers, and then on IBM compatible computers.

From an earlier time, and influenced also by these concepts, there has been a process of unification in the use of the keyboard and mouse by application programs. The merger of these two trends has led to the creation of the user interface, with the help of which, with minimal time and money spent on retraining staff, you can work with any software product. The description of this interface, common to all applications and operating systems, is the subject of this part.

The graphical user interface during its development has gone through two stages and is implemented at two levels of technology: a simple graphical interface and a "pure" WIMP interface.

At the first stage, the graphical interface was very similar to command line technology. The differences from the command line technology were as follows:

Ú When displaying characters, it was allowed to highlight some of the characters with color, inverted image, underline and blinking. Thanks to this, the expressiveness of the image has increased.

Ú Given the dependence on a specific implementation of the graphical interface, the cursor can be represented not only by a flickering rectangle, but also by some area covering several characters and even part of the screen. This selected area differs from other, unselected parts (usually in color).

Ú Pressing the Enter key does not always execute the command and move to the next line. The response to pressing any key depends largely on which part of the screen the cursor was on.

Ú In addition to the Enter key, there is an increasing use of "gray" cursor keys on the keyboard (see the keyboard section in issue 3 of this series.)

Ú Already in this edition of the graphical interface, manipulators began to be used (such as a mouse, trackball, etc. - see Figure A.4.) Οʜᴎ allowed you to quickly select the desired part of the screen and move the cursor.

Summing up, we can give the following distinctive features of this interface:

Ú Highlight areas of the screen.

Ú Redefining keyboard keys based on context.

Ú Using manipulators and gray keyboard keys to control the cursor.

Ú Extensive use of color monitors.

The appearance of this type of interface coincides with the widespread use of the MS-DOS operating system. It was she who introduced this interface to the masses, thanks to which the 80s were marked by the improvement of this type of interface, the improvement of character display characteristics and other monitor parameters.

A typical example of using this kind of interface is the Nortron Commander file shell and the Multi-Edit text editor. And text editors Lexicon, ChiWriter and word processor Microsoft Word for Dos are an example of how this interface has outdone itself.

The second stage in the development of the graphical interface was the "pure" WIMP interface. This subspecies of the interface is characterized by the following features:

- All work with programs, files and documents takes place in windows: framed areas of the screen.

- All programs, files, documents, devices and other objects are represented as icons. When opened, an icon turns into a window.

- All actions on objects are performed through menus. Although the menu appeared at the first stage of the graphical interface's development, it did not play a dominant role there, serving only as a supplement to the command line. In a pure WIMP interface, the menu becomes the main control element.

- Extensive use of pointing devices to point at objects. The pointing device ceases to be a mere toy appended to the keyboard and becomes the main control element: it is used to point at any area of the screen, window or icon, to select it, and only then to control it through the menu or other techniques.

It should be noted that implementing WIMP requires a high-resolution color raster display and a pointing device.
In addition, programs designed for this type of interface place higher demands on processor performance, memory size, bus bandwidth, and so on. At the same time, this type of interface is the easiest to learn and the most intuitive, which is why WIMP has become the de facto standard.

A striking example of a system built around the graphical interface is the Microsoft Windows operating system.
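The WIMP object model described above (every object is an icon; opening an icon turns it into a window) can be sketched in a few lines. The class and attribute names here are invented for illustration and do not correspond to any real windowing toolkit:

```python
# Hypothetical sketch of the WIMP model: icons represent objects,
# and opening an icon produces a framed window for that object.

class Window:
    def __init__(self, title: str):
        self.title = title  # the frame's title bar shows the object's name

class Icon:
    def __init__(self, name: str):
        self.name = name

    def open(self) -> Window:
        # Opening an icon "turns it into" a window, as the text describes.
        return Window(title=self.name)

doc = Icon("report.txt")
win = doc.open()
```

A real WIMP system adds menus and pointer events on top of this object model, but the icon-to-window transition is its core metaphor.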

SILK interface (Speech, Image, Language, Knowledge). This type of interface comes closest to the ordinary human form of communication: within it, a person and a computer hold a normal "conversation". The computer derives commands for itself by analyzing human speech and finding key phrases in it, and converts the results of command execution into a form readable by a human. This type of interface places the heaviest demands on computer hardware, which is why it has so far been used mainly for military purposes.

Since the mid-1990s, after inexpensive sound cards appeared and speech recognition technologies became widespread, the so-called "speech technology" of the SILK interface has emerged. In this technology, commands are given by voice, by pronouncing special reserved words (commands).

Words must be pronounced clearly and at the same pace, with a pause between them. Because speech recognition algorithms are still underdeveloped, such systems require individual pre-configuration for each specific user.

The "speech" technology is the simplest implementation of the SILK interface.

Biometric technology ("mimic interface").

This technology emerged in the late 1990s and is still under development at the time of writing. The computer is controlled using the person's facial expression, gaze direction, pupil size and other cues, while the user is identified by the pattern of the iris, fingerprints and other unique data. Images are captured by a digital video camera, and special image recognition software then extracts commands from them. This technology is likely to find its place in software products and applications where accurately identifying the computer user is important.
