Methods and tools for developing user interfaces: state of the art. Interfaces and technologies. Generalized user interfaces

The interaction of an operator with a computer is an important link in the computing process when solving applied problems, both scientific and industrial. Whether one is creating information sites for organizations and enterprises, programs for managing production processes, accounting for products and their sales, or quality management, or even automating a task as simple as a secretary sorting e-mail, user-friendly interaction with the computer must be developed.

Design is an iterative process by which software requirements are translated into engineering representations of the software. Design usually has two stages: preliminary design and detailed design. Preliminary design forms abstractions at the architectural level; detailed design refines these abstractions. In addition, interface design is often distinguished as a separate activity whose purpose is to form the graphical user interface (GUI). A diagram of the information links of the design process is shown in Fig.

Definition of the interface.

In general, an interface is a set of logical and physical principles of interaction between the components of a computing system (CS): a set of rules, algorithms, and timing agreements for exchanging data between CS components (the logical interface), together with the physical, mechanical, and functional characteristics of the connecting devices that implement this interaction (the physical interface).

The term interface often also refers to the hardware and software that implement the interconnection between devices and nodes of a computing system.

The interface covers all logical and physical means of interaction of the computing system with the external environment, for example, with the operating system, with the operator, etc.

Types of interfaces

Interfaces are distinguished by such characteristics as the structure of connections, the method of connecting and transmitting data, the principles of control and synchronization.

    Intra-machine interface - the system of communication and interfacing of computer nodes and units with each other. An intra-machine interface is a set of electrical communication lines (wires), circuits for interfacing with computer components, and protocols (algorithms) for transmitting and converting signals.

There are two options for organizing an intra-machine interface:

A multi-link interface, in which each PC unit is connected to the other units by its own local wires;

A single-link interface, in which all PC units are connected to each other via a common (system) bus.

2. External interface - communication system of the system unit with computer peripherals or with other computers

There are also several types of external interface:

The interface of peripheral devices connected via input-output buses (ISA, EISA, VLB, PCI, AGP, USB, IEEE 1394, SCSI, etc.);

A network interface such as a peer-to-peer or client-server network with star, ring, or bus topologies.

3. Human-machine interface (human-computer interface, user interface) is the way in which you perform a task using some means (a program): the actions you perform and what you receive in response.

An interface is human-centered if it meets the needs of the person and takes their weaknesses into account.

The machine part of the interface is the part of the interface implemented in the machine (its hardware and software) using the capabilities of computer technology.

The human part of the interface is the part implemented by a person, taking into account human capabilities, weaknesses, habits, ability to learn, and other factors.

The most common interfaces are defined by national and international standards.

In what follows, only the user interface will be considered.

User interface classification

As mentioned above, an interface is first of all a set of rules, which can be grouped by the similarity of the ways humans and computers interact.

There are three types of user interfaces: command, WIMP, and SILK interfaces.

The relationship of the listed interfaces to operating systems and technologies is shown in Fig. 1:

Fig. 1. User interfaces and their relation to technologies and operating systems.

1. Command interface, in which a person interacts with the computer by giving it commands, which the computer executes, returning the result to the user. The command interface can be implemented as batch technology and as command-line technology. At present, batch technology is practically unused, while command-line technology survives as a backup method of human-computer communication.

Batch technology.

Historically, this type of technology appeared first, on the electromechanical computers of K. Zuse and H. Aiken, then on the electronic computers of Eckert and Mauchly, on the domestic computers of Lebedev and Brusentsov, on the IBM-360, on the ES series of computers, and so on. Its idea is simple: a sequence of programs, punched, for example, onto punched cards, is fed to the computer's input together with a sequence of symbols that determines the order in which these programs are executed. A person here has little influence on the operation of the machine; he can only suspend it, change the program, and restart the computer.

Command line technology.

With this technology, the keyboard serves as the operator's means of entering information into the computer, and the computer outputs information to the person on an alphanumeric display (monitor). The monitor-keyboard combination came to be called the terminal, or console. Commands are typed at the command line, which consists of a prompt character and a blinking cursor; typed characters can be erased and edited. Pressing the Enter key makes the computer accept the command and begin executing it. After moving to the beginning of the next line, the computer displays the results of its work on the monitor. The command interface was most widespread in the MS-DOS operating system.

2. WIMP (window, image, menu, pointer) interface. A characteristic feature of this interface is that the user-computer dialogue is conducted not through the command line but through windows, menus, icons, the cursor, and other elements. Commands are still given to the machine in this interface, but through graphical images.

The idea for a graphical interface originated in the mid-1970s at the Xerox Palo Alto Research Center (PARC). The prerequisites for the graphical interface were a decrease in the response time of a computer to a command, an increase in the amount of RAM, and the development of the element base and technical characteristics of computers, monitors in particular. After the advent of graphic displays able to render arbitrary images in various colors, the graphical interface became an integral part of all computers. In parallel, application programs gradually came to use the keyboard and mouse in a unified way. The merger of these two trends led to a user interface with which, with minimal time and money spent on retraining personnel, one can work with any software application.

This kind of interface is implemented in two levels:

Simple graphical interface;

A full WIMP interface.

The simple graphical interface, which at the first stage was very similar to command-line technology, with the following differences:

When displaying symbols in order to increase the expressiveness of the image, it was allowed to highlight some of the symbols with color, inverse image, underlining and flickering;

The cursor could be represented by a certain area, highlighted in color and covering several characters and even part of the screen;

The reaction to pressing any key has become largely dependent on where the cursor is located.

In addition to the frequently used cursor-control keys, manipulators such as the mouse and trackball began to be used, which made it possible to quickly select the desired area of the screen and move the cursor;

Extensive use of color monitors.

The emergence of the simple graphical interface coincides with the widespread adoption of the MS-DOS operating system. Typical examples of its use are the Norton Commander file shell and the text editors MultiEdit, ChiWriter, Microsoft Word for DOS, Lexicon, etc.

The full WIMP interface was the second stage in the development of the graphical interface and is characterized by the following features:

All work with programs, files and documents takes place in windows;

Programs, files, documents, devices, and other objects are represented as icons, which turn into windows when opened;

All actions with objects are carried out using the menu, which becomes the main control element;

The manipulator acts as the main control device.

It should be noted that implementing the WIMP interface places increased demands on the performance of the computer, on the volume of its memory, and on a high-quality raster color display, and requires software oriented to this type of interface. Currently the WIMP interface has become the de facto standard, with the Microsoft Windows operating system as its most prominent representative.

3. SILK (speech, image, language, knowledge) interface. This interface is closest to the usual human form of communication. Within its framework a normal conversation takes place between a person and a computer: the computer finds commands for itself by analyzing human speech and locating key phrases in it, and converts the results of command execution into human-readable form. This type of interface demands large hardware resources, so it is still under development and improvement and is so far used only for military purposes.

SILK-interface for human-machine communication uses:

Speech technology;

Biometric technology (mimic interface);

Semantic (public) interface.

Speech technology appeared in the mid-90s, after the advent of inexpensive sound cards and the spread of speech recognition technologies. With this technology, commands are given by voice, by pronouncing special reserved words (commands), which must be spoken clearly, at the same pace, with obligatory pauses between words. Since speech recognition algorithms are not sufficiently developed, individual preliminary tuning of the computer system to a specific user is required. This is the simplest implementation of the SILK interface.

Biometric technology (the "mimic interface") originated in the late 90s and is currently under development. The computer uses the person's facial expression, gaze direction, pupil size, and other characteristics. To identify the user, the pattern of the iris, fingerprints, and other unique information is used, read from a digital camera; commands are then extracted from this image by pattern-recognition software.

The semantic (public) interface emerged in the late 70s of the twentieth century, with the development of artificial intelligence. It is difficult to call it an independent type of interface, since it includes the command-line, graphical, speech, and mimic interfaces. Its main feature is the absence of commands when communicating with a computer: the request is formed in natural language, as connected text and images. In essence, it is a simulation of human-computer communication. It is currently used for military purposes; such an interface is extremely necessary in an air combat environment.


TEST

by discipline

"System software"

Topic: "User Interface"



Introduction

1. Concept of the user interface

2. Types of interfaces

2.1 Command interface

2.2 Graphical interface

2.2.1 Simple graphical interface

2.2.2 WIMP - interface

2.3 Speech technology

2.4 Biometric technology

2.5 Semantic (public) interface

2.6 Types of interfaces

3. Methods and tools for developing user interface

4. Standardization of the user interface

Bibliography


Introduction


As is well known, information technology continues to penetrate ever deeper into practically all spheres of human activity. In addition to the already familiar and widespread personal computers, whose total number has reached many hundreds of millions, there are more and more embedded computing devices. The users of all this diverse computing technology keep multiplying, and two seemingly opposite trends are developing. On the one hand, information technologies are becoming ever more complicated, and applying them, let alone developing them further, requires very deep knowledge. On the other hand, user-computer interfaces are being simplified. Computers and information systems are becoming friendly and understandable even to a person who is not a specialist in computer science and computing technology. This became possible primarily because users and their programs interact with computers through special (system) software, the operating system, which provides interfaces both to running applications and to users.


1. Concept of the user interface


Interface - a set of technical, software and methodological (protocols, rules, agreements) means of interfacing in the computing system of users with devices and programs, as well as devices with other devices and programs.

Interface, in the broadest sense of the word, is a way (standard) of interaction between objects. In the technical sense, the interface defines the parameters, procedures, and characteristics of the interaction of objects. The following are distinguished:

User interface - a set of methods of interaction between a computer program and the user of this program.

Programming interface is a set of methods for interaction between programs.

A physical interface is a way of communication between physical devices. Most often we are talking about computer ports.

A user interface is a collection of software and hardware that provides for user interaction with a computer. Dialogues form the basis of such interaction; here a dialogue is understood as a regulated exchange of information between a person and a computer, carried out in real time and aimed at jointly solving a specific task. Each dialogue consists of separate input/output processes that physically provide communication between the user and the computer. Information is exchanged by sending messages.


Fig. 1. User interaction with the computer


Basically, the user generates messages of the following types:

information request

help request

operation or function request

entering or changing information

In response, the user receives hints or help; informational messages requiring a response; orders requiring action; error messages and other information.

The computer application user interface includes:

means for displaying information, displayed information, formats and codes;

command modes, user-interface language;

dialogues, interaction and transactions between the user and the computer, user feedback;

decision support in a specific subject area;

the procedure for using the program and its documentation.

The user interface (UI) is often understood to mean only the appearance of the program. In reality, however, the user perceives the entire program through it, so such an understanding is too narrow. The UI actually unites all the elements and components of a program that can influence the user's interaction with the software.

It is not only the screen that the user sees. The elements of the UI include:

a set of user tasks that he solves using the system;

the metaphor used by the system (for example, the desktop in MS Windows®);

system controls;

navigation between system blocks;

visual (and not only) design of program screens;

information display means, displayed information and formats;

data entry devices and technologies;

dialogues, interactions and transactions between the user and the computer;

user feedback;

decision support in a specific subject area;

the procedure for using the program and its documentation.


2. Types of interfaces


An interface is, first of all, a set of rules. Like any rules, they can be generalized, collected in a "code", and grouped by a common criterion. Thus we arrive at the concept of an "interface type" as a grouping by similarity of the ways humans and computers interact. Briefly, the following schematic classification of human-computer interfaces can be offered.

Modern types of interfaces are:

1) Command interface. The command interface is called so because in this type of interface a person gives "commands" to the computer, and the computer executes them and gives the result to the person. The command interface is implemented as batch technology and command line technology.

2) WIMP - interface (Window - window, Image - image, Menu - menu, Pointer - pointer). A characteristic feature of this type of interface is that the dialogue with the user is conducted not with the help of commands, but with the help of graphic images - menus, windows, and other elements. Although in this interface commands are given to the machine, but this is done "indirectly", through graphic images. This kind of interface is implemented on two levels of technologies: a simple graphical interface and a "pure" WIMP - interface.

3) SILK - interface (Speech - speech, Image - image, Language - language, Knowledge - knowledge). This type of interface is the closest to the usual, human form of communication. Within its framework a normal "conversation" takes place between a person and a computer: the computer finds commands for itself by analyzing human speech and locating key phrases in it, and converts the result of executing commands into human-readable form. This type of interface is the most demanding on the hardware resources of a computer, and therefore it is used mainly for military purposes.

2.1 Command interface


Batch technology. Historically, this type of technology appeared first. It already existed on the relay machines of Konrad Zuse (Germany, 1937). Its idea is simple: a sequence of characters is fed to the computer's input, in which, according to certain rules, the sequence of programs to be launched for execution is indicated. After one program finishes, the next one starts, and so on. The machine finds commands and data for itself according to fixed rules. The sequence could be, for example, a punched tape, a stack of punched cards, or a sequence of key presses on an electric typewriter (such as a CONSUL). The machine likewise issued its messages to a punch, an alphanumeric printer, or a typewriter tape. Such a machine is a "black box" (more precisely, a "white cabinet") into which information is constantly fed and which just as constantly "informs" the world about its state (see Figure 1). A person here has little influence on the operation of the machine: he can only suspend it, change the program, and restart the computer. Later, when machines became more powerful and could serve several users at once, the endless waiting of users ("I sent data to the machine. I am waiting for it to answer. Will it answer at all?") became, to put it mildly, annoying. In addition, computing centers became, after newspapers, the second largest "producer" of waste paper. Therefore, with the advent of alphanumeric displays, the era of truly user-friendly technology, the command line, began.

Fig. 2. A mainframe computer of the ES EVM series


Command line technology. With this technology, the keyboard serves as the only way to enter information from a person into a computer, and the computer outputs information to the person on an alphanumeric display (monitor). This combination (monitor + keyboard) became known as the terminal, or console. Commands are typed on the command line. The command line is a prompt symbol and a blinking rectangle, the cursor. When a key is pressed, characters appear at the cursor position and the cursor moves to the right. This is much like typing a command on a typewriter, except that the letters appear on the display rather than on paper, and an incorrectly typed character can be erased. The command is finished by pressing the Enter (or Return) key, after which input moves to the beginning of the next line; it is from this position that the computer displays the results of its work on the monitor. Then the process repeats. Command line technology worked even on monochrome alphanumeric displays. Since only letters, numbers, and punctuation marks could be entered, the technical characteristics of the display were not essential: a television receiver or even an oscilloscope tube could be used as a monitor.
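The read-execute-print cycle described above can be sketched in a few lines. This is a minimal illustrative model, not a real shell: the command names (`echo`, `ver`) and the error text are hypothetical stand-ins for what an MS-DOS-style interpreter might do.

```python
# Minimal sketch of a command-line interface loop.
# The command set here is hypothetical, for illustration only.
def execute(command: str) -> str:
    """Parse one command line and return the text to display."""
    name, _, argument = command.strip().partition(" ")
    if name == "echo":
        return argument                  # echo back the argument text
    if name == "ver":
        return "Sketch Shell 1.0"        # report a (made-up) version
    return f"Bad command or file name: {name}"

def repl() -> None:
    """Prompt, read a line, execute it, print the result, repeat."""
    while True:
        line = input("C:\\> ")           # the prompt and blinking cursor
        if line.strip() == "exit":
            break
        print(execute(line))             # results appear on the next line
```

Calling `repl()` reproduces the terminal cycle: the prompt is shown, the typed line is accepted on Enter, and the result is printed starting at the next line.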

Both of these technologies are implemented in the form of a command interface - commands are given to the machine as input, and it seems to "respond" to them.

Text files became the predominant file type when working with the command interface: they, and only they, could be created using the keyboard. The command-line interface reached its widest use with the appearance of the UNIX operating system and of the first eight-bit personal computers with the cross-platform CP/M operating system.


2.2 Graphical interface


How and when did the graphical interface appear? The idea originated in the mid-70s, when the concept of a visual interface was developed at the Xerox Palo Alto Research Center (PARC). The prerequisites for the graphical interface were a decrease in the response time of a computer to a command, an increase in the amount of RAM, and the development of the technical base of computers. The hardware basis of the concept was the appearance of alphanumeric displays, which already supported such effects as "flickering" of characters, color inversion (changing white characters on a black background to the opposite, black characters on a white background), and underlined characters. These effects extended not to the entire screen but only to one or more characters. The next step was the color display, able to show, along with these effects, characters in 16 colors on a background with a palette (that is, a color set) of 8 colors. After the appearance of graphic displays, able to render any graphic image as a multitude of dots of various colors, there were no limits to the imagination in using the screen. PARC's first graphical system, the 8010 Star Information System, thus appeared four months before the first IBM computer was released in 1981. Initially the visual interface was used only in application programs. Gradually it began to move into operating systems, used first on Atari and Apple Macintosh computers and then on IBM-compatible computers.

For some time before that, and also under the influence of these concepts, a process of unifying the use of the keyboard and mouse by application programs had been under way. The merger of these two trends led to the user interface with which, with minimal time and money spent on retraining personnel, one can work with any software product. This part is devoted to describing that interface, common to all applications and operating systems.


2.2.1 Simple graphical interface

At the first stage, the graphical interface was very similar to command-line technology. Its differences from command-line technology were as follows:

1. When displaying symbols, it was allowed to highlight some of the symbols with color, inverse image, underline and blinking. This has increased the expressiveness of the image.

2. Depending on the specific implementation of the graphical interface, the cursor could be represented not only as a flickering rectangle but also as an area covering several characters or even part of the screen, distinguished from the unselected parts (usually by color).

3. Pressing the Enter key did not always execute a command and move to the next line. The reaction to pressing any key came to depend largely on where the cursor was on the screen.

4. In addition to the Enter key, the "gray" cursor keys are increasingly used on the keyboard.

5. Already in this version of the graphical interface, manipulators (the mouse, the trackball, etc. - see Fig. 3) began to be used. They made it possible to quickly select the desired part of the screen and move the cursor.


Fig. 3. Manipulators


Summing up, the following distinctive features of this interface can be cited.

1) Selection of areas of the screen.

2) Redefine keyboard keys depending on the context.

3) Using manipulators and gray keyboard keys to control the cursor.

4) Extensive use of color monitors.

The emergence of this type of interface coincides with the widespread adoption of the MS-DOS operating system. It was this system that brought the interface to the masses, thanks to which the 80s passed under the sign of improving this type of interface, the quality of character display, and other monitor parameters.

Typical examples of this kind of interface are the Norton Commander file shell (see below on file shells) and the Multi-Edit text editor. The text editors Lexicon and ChiWriter and the word processor Microsoft Word for DOS are examples of programs that pushed this interface to its limits.

2.2.2 WIMP - interface

The second stage in the development of the graphical interface was the "pure" WIMP interface. This interface subtype is characterized by the following features.

1. All work with programs, files and documents takes place in windows - certain parts of the screen outlined with a frame.

2. All programs, files, documents, devices and other objects are presented in the form of icons - icons. When opened, the icons turn into windows.

3. All actions with objects are carried out using the menu. Although the menu appeared at the first stage of the development of the graphical interface, it did not have a dominant meaning in it, but served only as an addition to the command line. In a pure WIMP interface, the menu becomes the main control.

4. Extensive use of manipulators to point at objects. The manipulator ceases to be a mere toy or keyboard accessory and becomes the main control element: with it the user points at any area of the screen, a window, or an icon, selects it, and only then controls it through the menu or other technologies.

It should be noted that WIMP requires a high-resolution color raster display and a manipulator for its implementation. Programs oriented to this type of interface also place increased demands on computer performance, memory volume, bus bandwidth, and so on. However, this kind of interface is the easiest to learn and the most intuitive, and the WIMP interface has therefore become the de facto standard.
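The four WIMP features above can be condensed into a toy object model: objects are shown as icons, an opened icon becomes a window, and the menu is the main control through which actions are chosen. The class and menu-item names here are illustrative assumptions, not part of any real GUI toolkit.

```python
# Toy model of the WIMP idea (window, image/icon, menu, pointer).
# All names are hypothetical, chosen only to mirror the text above.
class Window:
    def __init__(self, title: str):
        self.title = title
        self.is_open = True
        self.menu = {"Close": self.close}   # the menu is the main control

    def close(self) -> None:
        self.is_open = False

    def choose(self, item: str) -> None:
        self.menu[item]()                   # pointer selects a menu item

class Icon:
    """An object on the desktop; opening it turns it into a window."""
    def __init__(self, title: str):
        self.title = title

    def open(self) -> Window:
        return Window(self.title)

desktop = [Icon("Documents"), Icon("Printer")]
window = desktop[0].open()       # the icon turns into a window
window.choose("Close")           # the action is performed via the menu
```

The point of the sketch is the flow of control: the user never types a command; every action goes object, then menu, then operation.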

A striking example of programs with a graphical interface is the Microsoft Windows operating system.

2.3 Speech technology


Since the mid-90s, after the advent of inexpensive sound cards and the spread of speech recognition technologies, the so-called "speech technology" of the SILK interface has been in use. With this technology, commands are given by voice, by pronouncing special reserved words (commands). The main such commands (according to the rules of the "Gorynych" system) are:

"Rest" - turn off the speech interface.

"Open" - switch to the mode of calling one or another program. The name of the program is named in the next word.

"I will dictate" - switching from command mode to voice typing mode.

"Command mode" - return to voice command mode.

and some others.

Words must be pronounced clearly, at the same pace, with a pause between words. Because speech recognition algorithms are underdeveloped, such systems require individual preliminary configuration for each specific user.

"Speech" technology is the simplest implementation of the SILK interface.
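Once a phrase has been recognized as text, dispatching it reduces to keyword spotting against the reserved words. The sketch below models that last step only; the phrase-to-action table is loosely based on the "Gorynych" commands listed above, with hypothetical action labels, and no actual speech recognition is performed (recognized text is assumed to arrive as a string).

```python
# Keyword-spotting dispatch for voice commands (illustrative sketch).
# The reserved phrases follow the "Gorynych"-style list above;
# the action strings on the right are made up for this example.
COMMANDS = {
    "i will dictate": "switch to dictation mode",
    "command mode": "return to command mode",
    "open": "launch the named program",
    "rest": "turn off the speech interface",
}

def dispatch(recognized_text: str) -> str:
    """Match a recognized phrase against the reserved command words."""
    phrase = recognized_text.lower().strip()
    for keyword, action in COMMANDS.items():
        if phrase.startswith(keyword):
            return action
    return "unrecognized"
</ ```

Note that multi-word phrases must be checked before their prefixes, which is why "command mode" precedes "open" and "rest" in the table; this ordering requirement is one reason real recognizers need careful per-user tuning.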


2.4 Biometric technology


This technology emerged in the late 1990s and is still under development at the time of this writing. To control a computer, a person's facial expression, gaze direction, pupil size, and other signs are used. To identify the user, the pattern of the iris of the eyes, fingerprints, and other unique information is used. Images are read from a digital video camera, and commands are then extracted from the image by special pattern-recognition software. This technology is likely to take its place in software products and applications where it is important to accurately identify the computer user.


2.5 Semantic (public) interface


This type of interface emerged in the late 70s of the XX century, with the development of artificial intelligence. It can hardly be called an independent type of interface: it includes the command-line, graphical, speech, and mimic interfaces. Its main distinguishing feature is the absence of commands when communicating with a computer: the request is formed in natural language, as connected text and images. In essence, it is difficult to call it an interface at all; it is already a simulation of human "communication" with a computer. Since the mid-90s of the XX century, publications on the semantic interface have ceased to appear. It seems that, owing to the military significance of these developments (for example, for the autonomous conduct of modern combat by robotic machines, or for "semantic" cryptography), these areas were classified. Reports that these studies are ongoing occasionally appear in periodicals (usually in computer news sections).


2.6 Types of interfaces


User interfaces are of two types:

1) procedure-oriented:

primitive

menu

with free navigation

2) object-oriented:

direct manipulation.

The procedure-oriented interface uses a traditional user interaction model based on the concepts of "procedure" and "operation". Within this model, the software provides the user with the ability to perform a set of actions; the user supplies the data each action requires, and performing the actions yields the desired result.

Object-oriented interfaces use a user interaction model focused on manipulating objects in the subject domain. Within this model, the user is given the opportunity to interact directly with each object and to initiate operations during which several objects interact. The user's task is formulated as a purposeful change of some object, where an object is understood in the broad sense of the word: a model of a database, a system, etc. An object-oriented interface assumes that interaction with the user is carried out by selecting and moving icons of the corresponding subject area. A distinction is made between single-document (SDI) and multi-document (MDI) interfaces.

Procedural-Oriented Interfaces:

1) Provide the user with the functions necessary to complete the tasks;

2) The emphasis is on tasks;

3) Icons represent applications, windows or operations;

Object Oriented Interfaces:

1) Provide the user with the ability to interact with objects;

2) The emphasis is on inputs and results;

3) Pictograms represent objects;

4) Folders and directories are visual containers of objects.

An interface is called primitive if it organizes user interaction in console mode. The only deviation from the sequential process of data entry that it provides is looping through multiple sets of data.

Menu interface. Unlike the primitive interface, it allows the user to select an operation from a special list displayed by the program. Such interfaces assume the implementation of many work scenarios, in which the sequence of actions is determined by the user. The tree-like organization of the menu implies strictly limited navigation through it. In this case, two options for organizing the menu are possible:

each menu window occupies the entire screen

there are several multilevel menus on the screen at the same time (Windows).

With such limited navigation, regardless of the implementation option, finding an item in a menu more than two levels deep turns out to be a rather difficult task.

Free navigation interface (graphical interface). It supports the concept of interactive software interaction, visual feedback to the user and the ability to directly manipulate objects (buttons, indicators, status bars). Unlike the menu interface, the free navigation interface makes it possible to perform any operation allowed in a given state, with access provided through various interface components (hot keys, etc.). An interface with free navigation is implemented by means of event-driven programming, which implies the use of visual development tools (via messages).
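As an illustration, the event-driven model behind a free-navigation interface can be sketched in a few lines of C#. The Button class and its Click event below are simplified stand-ins of my own, not the API of any particular GUI toolkit:

```csharp
using System;

// A minimal sketch of event-driven interaction: a simplified Button type
// raises a Click event, and the application reacts through handlers
// instead of driving the user through a fixed sequence of steps.
class Button
{
    public event EventHandler Click;

    // Stands in for the windowing system dispatching a click message.
    public void SimulateClick() => Click?.Invoke(this, EventArgs.Empty);
}

class Program
{
    static void Main()
    {
        var openButton = new Button();

        // The developer only wires up handlers; the order of operations
        // is decided by the user at run time, not by the program.
        openButton.Click += (sender, e) => Console.WriteLine("Open command executed");

        openButton.SimulateClick();
    }
}
```

Here the order of operations emerges from events the user triggers, which is exactly what distinguishes free navigation from a fixed menu scenario.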

3. Methods and tools for developing user interface


The interface is essential for any software system and is an integral part of it, oriented primarily toward the end user. It is through the interface that the user judges the application as a whole; moreover, the user often decides whether to use an application at all based on how convenient and understandable its user interface is. At the same time, designing and developing an interface is laborious: according to experts, it accounts on average for more than half of project implementation time. This makes it important to reduce the cost of developing and maintaining software systems by creating effective tools for interface development.

One of the ways to reduce the cost of developing and maintaining software systems is to have fourth-generation tools in the toolkit, which make it possible to describe (specify) the software being created at a high level and then automatically generate executable code from the specification.

In the literature there is no single generally accepted classification of user interface development tools. User interface development software can be divided into two main groups: toolkits and higher-level development tools. A user interface toolkit, as a rule, includes a library of interface component primitives (menus, buttons, scroll bars, etc.) and is intended for use by programmers. High-level interface development tools can also be used by non-programmers; they provide a language that allows I/O functions to be specified and interface elements to be defined using direct manipulation techniques. Such tools include interface builders and User Interface Management Systems (UIMS). Besides UIMS, some authors use terms such as User Interface Development Systems (UIDS), User Interface Design Environment (UIDE), etc.

Dedicated interface development tools simplify user interface construction by having the developer specify interface components using specification languages. There are several main ways to specify an interface:

1. Linguistic, when special languages are used to specify the interface syntax (declarative, object-oriented, event languages, etc.).

2. Graphical specification, which defines the interface by means of visual programming, programming by demonstration and programming by example. This way supports a limited class of interfaces.

3. An interface specification based on an object-oriented approach is associated with a principle called direct manipulation. Its main property is the user's interaction with individual objects, and not with the entire system as a whole. Typical components used for manipulating objects and control functions are handlers, menus, dialog zones, buttons of various kinds.

4. Interface specification based on the application specification. Here the interface is created automatically from the specification of the application's semantics. However, the complexity of describing interface semantics makes it unlikely that systems implementing this approach will appear soon.

The main concept of a UIMS is to separate user interface development from the rest of the application. Currently, the idea of designing the interface and the application separately is either enshrined in the definition of the UIMS or is considered its main property.

A UIMS is defined as a set of development-stage and runtime tools. Development-stage tools operate on interface models to build interface projects. They can be divided into two groups: interactive tools, such as model editors, and automatic tools, such as a form generator. Runtime tools use the interface model to support user activities, for example, collecting and analyzing usage data.

The functions of a UIMS are to simplify and speed up the development and maintenance of the user interface, and to manage the interaction between the user and the application program.

Thus, there are currently a large number of interface development tools that support various methods of its implementation.


4. Standardization of the user interface


In the first approach, the assessment is made by the end user (or a tester), who summarizes the results of working with the program in terms of the following indicators of ISO 9241-11 (Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability):

effectiveness - the influence of the interface on the completeness and accuracy of the user's achievement of target results;

efficiency (productivity) - the influence of the interface on the productivity of the user;

the degree of (subjective) satisfaction of the end user with this interface.

Effectiveness is a criterion of the functionality of the interface, while the degree of satisfaction and, indirectly, productivity are criteria of its ergonomics. The measures introduced here are in line with the general pragmatic concept of quality assessment in terms of the goal/cost ratio.

The second approach attempts to establish which (guiding ergonomic) principles the user interface should satisfy for optimal human-machine interaction. The development of this analytical approach has been driven by the needs of software design and development, as it provides guidelines for organizing an optimal user interface and for its characteristics. This approach can also be used to assess the quality of a developed user interface. In this case, an expert assesses quality according to the degree to which the guidelines, or the more specific graphical and operational features of an optimal "human-centered" user interface derived from them, are implemented.

Standardization and design. When designing a user interface, the initial decision is the choice of the basic standards for the types of interface controls, which should take into account the specifics of the relevant subject area. The user interface style is specified in regulatory documents at the industry and corporate level. Further detailing of the interface design is possible for a specific group of software products from the developer. When developing a user interface, the characteristics of the intended end users of the software must be taken into account. The specification of the user interface type defines only its syntactics.

The second direction of standardization in design is the formation of a specific system of guiding ergonomic principles. The decision on their selection should be made jointly by all members of the design team, and this system should be consistent with the relevant underlying standard (or group of standards). To be an effective design tool, a system of guidelines must be brought to the level of specific instructions for programmers. When developing these instructions, regulatory documents on the interface type (style) are taken into account, and the regulatory documents on user interface design should be included in the profile of the software project standards and in the terms of reference.

Standards and quality. Formally, it is appropriate to link the standardization of the user interface with other infrastructural sub-characteristics of software product quality, such as conformity (including compliance with standards) and replaceability (GOST R ISO/IEC 9126-93). The choice of a specific design tool (rapid application development languages, CASE tools, graphical interface builders) can lead the developer to the need to adhere to the interface standard underlying it.

On the other hand, the choice by the developer of the standard of the type (style) of the user interface, adequate to the subject area and the operating system used, has the potential to ensure, at least in part, the fulfillment of such principles of the quality of the user interface as naturalness and consistency within the working environment. Explicit consideration of interface syntactics makes it easier to create a consistently styled and predictable user interface. In addition, it should be borne in mind that when developing the standard itself, the basic principles of user interface design were already taken into account.

The usability measures introduced in ISO 9241-11 can be used by the contracting authority prior to the development of the custom system as a general framework for defining the usability requirements that the future system must meet and against which acceptance tests will be carried out. Thus, a basis is created to ensure the completeness, measurability and comparability of these requirements, which can indirectly have a positive impact on the quality of the designed software product.

Does this mean that strict adherence to standards can ensure the necessary quality of the user interface? For simple and routine applications, adherence to a standard guarantees only a minimum level of quality. For complex and pioneering applications, the requirement of completeness may conflict with the limited capabilities provided by the standard's user interface controls.


Bibliography


T.B. Bolshakov, D.V. Irtegov. Operating Systems. Materials of the site http://www.citforum.ru/operating_systems/ois/introd.shtml.

A.S. Kleschev, V.V. Gribova. Methods and tools for developing user interface: current state, 2001. Materials of the site http://www.swsys.ru/index.php?page=article&id=765.

Like any technical device, a computer exchanges information with a person through a set of certain rules that are binding both on the machine and on the person. These rules are called the interface in the computer literature. An interface can be clear or obscure, friendly or not; many adjectives suit it. But in one thing it is constant: it exists, and there is no getting away from it.

An interface is the set of rules for interaction of the operating system with users, as well as with neighboring levels in a computer network. The technology of human-computer communication depends on these rules.

An interface is, above all, a set of rules. Like any rules, they can be generalized, collected into a "code", and grouped according to a common criterion. Thus, we arrive at the concept of an "interface type" as a grouping of similar ways of interaction between humans and computers. The following schematic classification of human-computer interfaces can be proposed (Fig. 1).

Batch technology. Historically, this type of technology appeared first. It already existed on Zuse's relay machines (Germany, 1937). Its idea is simple: a sequence of characters is fed to the input of the computer, and this sequence specifies, according to certain rules, the order of programs launched for execution. After the execution of one program, the next one starts, and so on. The machine finds commands and data for itself according to certain rules. This sequence can be, for example, a punched tape, a stack of punched cards, or a sequence of key presses on an electric typewriter (of the CONSUL type). The machine also issues its messages to a punch, to an alphanumeric printer, or to the typewriter.

Such a machine is a "black box" (more precisely, a "white cabinet") into which information is constantly fed and which just as constantly "informs" the world about its state. A person here has little influence on the operation of the machine: he can only suspend it, change the program and restart the computer. Later, when machines became more powerful and could serve several users at once, the eternal waiting of users ("I sent the data to the machine. I am waiting for it to answer. Will it answer at all?") became, to put it mildly, tiresome. In addition, computing centers became the second largest "producer" of waste paper, after newspapers. For this reason, the advent of alphanumeric displays ushered in the era of truly user-oriented technology: the command line.

Command interface.

The command interface is usually called so because in this type of interface a person gives "commands" to the computer, and the computer executes them and gives the result to the person. The command interface is implemented as batch technology and command line technology.

With this technology, the keyboard serves as the only way to enter information from a person to a computer, and the computer outputs information to a person using an alphanumeric display (monitor). This combination (monitor + keyboard) became known as the terminal, or console.

Commands are typed on the command line. The command line is a prompt symbol and a blinking rectangle - the cursor.
When you press a key, characters appear at the cursor position, and the cursor moves to the right. A command is completed by pressing the Enter (or Return) key, after which the cursor moves to the beginning of the next line. It is from this position that the computer displays the results of its work on the monitor. Then the process repeats.

Command line technology already worked on monochrome alphanumeric displays. Since only letters, numbers and punctuation marks could be entered, the technical characteristics of the display were not essential. A television receiver or even an oscilloscope tube could be used as a monitor.

Both of these technologies are implemented in the form of a command interface: the machine is given commands as input, and it "responds" to them.

Text files became the predominant type of files when working with the command interface: they, and only they, could be created using the keyboard. The command line interface reached its widest use with the emergence of the UNIX operating system and the appearance of the first eight-bit personal computers with the cross-platform CP/M operating system.

WIMP interface (Window, Image, Menu, Pointer). A characteristic feature of this type of interface is that the dialogue with the user is conducted not with commands but with graphic images: menus, windows and other elements. Although commands are still given to the machine in this interface, this is done "indirectly", through graphic images. The idea of the graphical interface originated in the mid-1970s, when the concept of a visual interface was developed at the Xerox Palo Alto Research Center (PARC). The prerequisites for the graphical interface were a decrease in the response time of the computer to a command, an increase in the amount of RAM, and the development of the hardware base of computers. The hardware foundation of the concept was, of course, the appearance of alphanumeric displays, which already supported such effects as "flickering" characters, color inversion (changing white characters on a black background to the opposite, that is, black characters on a white background), and underlined characters. These effects extended not to the entire screen but only to one or more characters. The next step was the color display, capable of showing, along with these effects, characters in 16 colors on a background with a palette (that is, a color set) of 8 colors. After the appearance of graphic displays, which could show any graphic image as a set of dots of different colors, there were no limits to imagination in using the screen. PARC's first graphical system, the 8010 Star Information System, thus appeared in 1981, four months before the release of IBM's first personal computer. Initially, the visual interface was used only in application programs. Gradually it migrated to operating systems, used first on Atari and Apple Macintosh computers, and then on IBM-compatible computers.

At the same time, and also under the influence of these concepts, a process of unifying the use of the keyboard and mouse by application programs was under way. The merger of these two trends led to the creation of a user interface with which, with a minimal investment of time and money in retraining personnel, it is possible to work with any software product. This part is devoted to the description of this interface, common to all applications and operating systems.

During its development, the graphical user interface has gone through two stages and is implemented at two levels of technology: the simple graphical interface and the "pure" WIMP interface.

At the first stage, the graphical interface was very similar to command line technology. The differences from the command line technology were as follows:

• When displaying symbols, it was allowed to highlight some of them with color, inverse video, underlining and blinking. This increased the expressiveness of the image.

• Depending on the specific implementation of the graphical interface, the cursor could appear not only as a flickering rectangle, but also as an area covering several characters or even part of the screen. This highlighted area differs from the other, unselected parts (usually in color).

• Pressing the Enter key did not always execute the command and move to the next line. The reaction to pressing any key largely depended on where the cursor was on the screen.

• In addition to the Enter key, the "gray" cursor keys were increasingly used (see the keyboard section in issue 3 of this series).

• Already in this version of the graphical interface, manipulators (such as the mouse and trackball; see Figure A.4) began to be used. They made it possible to quickly select the desired part of the screen and move the cursor.

Summing up, the following distinctive features of this interface can be cited:

• Highlighting areas of the screen.

• Redefining keyboard keys depending on context.

• Using pointing devices and the gray keyboard keys to control the cursor.

• Extensive use of color monitors.

The emergence of this type of interface coincides with the widespread adoption of the MS-DOS operating system. It was this system that introduced the interface to the masses, thanks to which the 1980s passed under the sign of improving this type of interface, the display characteristics of symbols and other monitor parameters.

Typical examples of this kind of interface are the Norton Commander file shell and the Multi-Edit text editor. And the text editors Lexicon and ChiWriter and the word processor Microsoft Word for DOS are examples of how this interface surpassed itself.

The second stage in the development of the graphical interface was the "pure" WIMP interface. This interface subtype is characterized by the following features:

• All work with programs, files and documents takes place in windows: certain parts of the screen outlined with a frame.

• All programs, files, documents, devices and other objects are represented as icons (pictograms). When opened, the icons turn into windows.

• All actions with objects are carried out using the menu. Although the menu appeared at the first stage of the development of the graphical interface, it did not have a dominant role there, serving only as an addition to the command line. In a pure WIMP interface, the menu becomes the main control element.

• Extensive use of manipulators to point at objects. The manipulator ceases to be a toy, an addition to the keyboard, and becomes the main control element. The user points at any area of the screen, a window or an icon, selects it, and only then, through the menu or other technologies, controls it.

It should be noted that WIMP requires a high-resolution color raster display and manipulator for its implementation.
Also, programs oriented toward this type of interface impose increased requirements on the performance of the computer, the amount of its memory, bus bandwidth, and so on. At the same time, this type of interface is the easiest to learn and the most intuitive. For this reason, the WIMP interface has now become the de facto standard.

A striking example of programs with a graphical interface is the Microsoft Windows operating system.

SILK interface (Speech, Image, Language, Knowledge). This type of interface is the closest to the usual human form of communication. Within the framework of this interface, a normal "conversation" takes place between a person and a computer. The computer finds commands for itself by analyzing human speech and finding key phrases in it. It also converts the result of executing commands into a human-readable form. This type of interface is the most demanding on the hardware resources of a computer, and therefore it is used mainly for military purposes.

Since the mid-1990s, after the advent of inexpensive sound cards and the widespread use of speech recognition technologies, the so-called "speech technology" of the SILK interface has appeared. With this technology, commands are given by voice, by pronouncing special reserved command words.


  • Last Monday I was lucky enough to get an interview for a Senior .Net Developer position with an international company. During the interview, I was asked to take a test in which a number of questions related to .Net. In particular, one of the questions asked to assess (true/false) a number of statements, among which was the following:

    In .Net, any array of elements, for example int, implements IList by default, which allows it to be used as a collection in a foreach statement.

    Quickly answering this question in the negative, and adding separately in the margin that foreach needs an implementation not of IList but of IEnumerable, I moved on to the next question. However, on the way home I was tormented by the question: does the array implement this interface after all or not?
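Looking ahead, the foreach half of that statement is easy to check in isolation. The sketch below (the Countdown type is my own illustration, not part of the test question) implements IEnumerable&lt;int&gt; but not IList, and foreach accepts it anyway:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// A type that implements IEnumerable<int> but NOT IList:
// foreach still works, because foreach relies on enumeration only.
// (Strictly speaking, foreach is pattern-based: a suitable public
// GetEnumerator() method is enough even without the interface.)
class Countdown : IEnumerable<int>
{
    private readonly int _from;
    public Countdown(int from) => _from = from;

    public IEnumerator<int> GetEnumerator()
    {
        for (int i = _from; i >= 1; i--)
            yield return i;
    }

    // Note: this non-generic overload is itself an explicit
    // interface implementation, the technique discussed below.
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

class Program
{
    static void Main()
    {
        foreach (var n in new Countdown(3))
            Console.Write(n + " ");   // prints: 3 2 1
    }
}
```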

    I vaguely remembered about IList that this interface gives me IEnumerable, an indexer and a Count property containing the number of elements in the collection, as well as a couple of rarely used properties like IsFixedCollection(). The array has a Length property for its size, and Count in IEnumerable is an extension method from LINQ, which would not be possible if this method were implemented in the class itself. Thus it turned out that the array could not implement the IList interface, but some vague feeling gave me no rest. So in the evening after the interview, I decided to do a little research.

    System.Array class

    Since I didn't have Reflector.Net installed, I just wrote a short C# program to find out which interfaces are implemented by an integer array.

    var v = new int[] { 1, 2, 3 };
    var t = v.GetType();
    var i = t.GetInterfaces();
    foreach (var tp in i)
        Console.WriteLine(tp.Name);

    Here is a complete list of the resulting interfaces from the console window:

    ICloneable IList ICollection IEnumerable IStructuralComparable IStructuralEquatable IList`1 ICollection`1 IEnumerable`1 IReadOnlyList`1 IReadOnlyCollection`1

    Thus, an array in .Net does implement the IList interface, as well as its generic version IList&lt;T&gt;.
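The cast can also be verified directly. This small sketch (mine, not from the original test) shows that the IList members are all reachable through the interface, although the mutating ones throw, since an array is fixed-size:

```csharp
using System;
using System.Collections;

class Program
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3 };

        // The assignment compiles and succeeds at run time:
        // arrays really do implement the non-generic IList.
        IList list = numbers;

        Console.WriteLine(list[1]);          // 2  (indexer from IList)
        Console.WriteLine(list.Count);       // 3  (Count from ICollection)
        Console.WriteLine(list.IsFixedSize); // True

        // The "optional" mutating members throw: an array cannot grow.
        try { list.Add(4); }
        catch (NotSupportedException) { Console.WriteLine("Add is not supported"); }
    }
}
```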

    To get more information, I have drawn a diagram of the System.Array class.

    My mistake immediately caught my eye: Count was not a property of IList but of ICollection, the previous interface in the inheritance chain. However, the array itself no longer had this property, just like many other properties of the IList interface, although other properties of this interface, IsFixedSize and IsReadOnly, were implemented. How is this even possible?

    Everything immediately falls into place when you remember that in C# interfaces can be implemented not only implicitly but also explicitly. I knew about this possibility from textbooks, which gave an example of such an implementation for the case when the base class already contains a method with the same name as the interface method. I had also seen this possibility in ReSharper. However, until now I had not directly encountered the need to explicitly implement interfaces in my own projects.

    Comparison of Explicit and Implicit Implementation of Interfaces

    Let's compare these two kinds of interface implementations:
    Criterion: Basic syntax
        Implicit: interface ITest { void DoTest(); }
                  public class ImplicitTest : ITest { public void DoTest() { } }
        Explicit: interface ITest { void DoTest(); }
                  public class ExplicitTest : ITest { void ITest.DoTest() { } }

    Criterion: Visibility
        Implicit: The implementation is always public, so its methods and properties can be called directly:
                  var imp = new ImplicitTest(); imp.DoTest();
        Explicit: The implementation is always private. To reach it, the class instance must be cast to the interface (an upcast to the interface):
                  var exp = new ExplicitTest(); ((ITest)exp).DoTest();

    Criterion: Polymorphism
        Implicit: The implementation can be virtual, which allows it to be overridden in descendant classes.
        Explicit: The implementation is always non-virtual. It cannot be overridden (override) or hidden (new) in descendant classes. (Note 1)

    Criterion: Abstract class and implementation
        Implicit: The implementation can be abstract and be provided only in a descendant class.
        Explicit: The implementation cannot be abstract, but the class itself can have other abstract methods and be abstract itself. (Note 2)

    Notes:

    Note 1 - As rightly noted in the comments, the implementation can be overridden by explicitly re-implementing the interface in the descendant class (see the first comment to the article).

    Note 2 - One of the blogs states that the class itself cannot be abstract. Perhaps this was true for one of the previous versions of the compiler; in my experiments I could easily implement an interface explicitly in an abstract class.

    Why Explicit Implementation of Interfaces is Needed

    An explicit interface implementation, according to MSDN, is necessary when multiple interfaces implemented by a class have a method with the same signature. This problem is generally known in the English-speaking world under the chilling name “deadly diamond of death”, which translates into Russian as “the diamond problem”. Here's an example of such a situation:

    /* Listing 1 */
    interface IJogger { void Run(); }
    interface ISkier { void Run(); }
    public class Athlete : ISkier, IJogger
    {
        public void Run()
        {
            Console.WriteLine("Am I an Athlete, Skier or Jogger?");
        }
    }

    By the way, this example is correct C# code: it compiles and runs, while the Run() method is simultaneously a method of the class itself and an implementation of two interfaces. Thus, we can have one implementation for different interfaces and for the class itself. You can check this with the following code:

    /* Listing 2 */
    var sp = new Athlete();
    sp.Run();
    (sp as ISkier).Run();
    (sp as IJogger).Run();

    The result of executing this code will be "Am I an Athlete, Skier or Jogger?" displayed in the console three times.

    This is where we can use an explicit interface implementation to separate all three cases:

    /* Listing 3 */
    public class Sportsman
    {
        public virtual void Run() { Console.WriteLine("I am a Sportsman"); }
    }
    public class Athlete : Sportsman, ISkier, IJogger
    {
        public override void Run() { Console.WriteLine("I am an Athlete"); }
        void ISkier.Run() { Console.WriteLine("I am a Skier"); }
        void IJogger.Run() { Console.WriteLine("I am a Jogger"); }
    }

    In this case, when executing the code from Listing 2, we will see three lines in the console, "I am an Athlete", "I am a Skier" and "I am a Jogger".

    Pros and cons of different interface implementations

    Implementation visibility and selective implementation
    As already shown above, an implicit implementation is syntactically indistinguishable from an ordinary class method (moreover, if such a method was already defined in an ancestor class, the method in the descendant hides it, and the code still compiles, producing only a compiler warning about method hiding). It is also possible to implement individual methods of the same interface selectively, some explicitly and some implicitly:

    /* Listing 4 */
    public class Code
    {
        public void Run()
        {
            Console.WriteLine("I am a class method");
        }
    }

    interface ICommand
    {
        void Run();
        void Execute();
    }

    public class CodeCommand : Code, ICommand
    {
        // implicit interface method implementation
        // => public implementation
        // implicitly hides the base class method (warning here)
        public void Run()
        {
            base.Run();
        }

        // explicit interface method implementation
        // => private implementation
        void ICommand.Execute() { }
    }

    This allows implementations of individual interface methods to be used as ordinary class methods: they are visible, for example, through IntelliSense, in contrast to explicitly implemented methods, which are private and accessible only after casting to the corresponding interface.

    On the other hand, the possibility of implementing methods privately lets you hide a number of interface methods while still fully implementing the interface. Returning to our very first example with arrays in .NET, you can see that an array hides, for example, the implementation of the Count property of the ICollection interface, exposing this property to the outside under the name Length (probably an attempt to stay compatible with the C++ STL and Java). In this way we can hide individual methods of an implemented interface and leave others public.
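    The Length/Count behaviour of arrays is easy to verify. Below is a minimal sketch (using the non-generic ICollection from System.Collections) showing that Count only becomes reachable after a cast to the interface:

```csharp
using System;
using System.Collections;

class ArrayCountDemo
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3 };

        // The public surface of the array exposes Length...
        Console.WriteLine(numbers.Length);               // prints 3

        // ...while ICollection.Count is implemented explicitly
        // and only becomes visible after a cast to the interface.
        Console.WriteLine(((ICollection)numbers).Count); // prints 3

        // numbers.Count  // would not compile: no such public member
    }
}
```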

    Here, however, a problem arises: in many cases it is impossible to guess which interfaces a class implements "behind the scenes", since neither the methods nor the properties of these interfaces are visible in IntelliSense (the System.Array example is illustrative here as well). The only way to detect such implementations is reflection, for example via the Object Browser in Visual Studio.
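    Such implementations can also be uncovered programmatically with reflection via Type.GetInterfaceMap. A minimal sketch (reusing the Athlete class from Listing 3, flattened here without the Sportsman base for brevity):

```csharp
using System;

interface ISkier  { void Run(); }
interface IJogger { void Run(); }

public class Athlete : ISkier, IJogger
{
    public void Run()  { Console.WriteLine("I am an Athlete"); }
    void ISkier.Run()  { Console.WriteLine("I am a Skier"); }
    void IJogger.Run() { Console.WriteLine("I am a Jogger"); }
}

class ReflectionDemo
{
    static void Main()
    {
        foreach (var iface in typeof(Athlete).GetInterfaces())
        {
            var map = typeof(Athlete).GetInterfaceMap(iface);
            for (int i = 0; i < map.InterfaceMethods.Length; i++)
            {
                // An explicit implementation is compiled as a private method
                // whose name is qualified with the interface name ("ISkier.Run").
                var target = map.TargetMethods[i];
                Console.WriteLine($"{iface.Name}.{map.InterfaceMethods[i].Name}" +
                                  $" -> {target.Name} (private: {target.IsPrivate})");
            }
        }
    }
}
```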

    Refactoring interfaces
    Since an implicit (public) implementation of an interface is indistinguishable from an ordinary public method of the class, refactoring the interface and removing a public method from it (for example, merging the Run() and Execute() methods of the ICommand interface above into a single Run()) leaves a public method behind in every implicit implementation. Most likely this orphaned method will have to be maintained even after the refactoring, since other components of the system may already depend on it. As a result, the principle of "program to interfaces, not implementations" is violated, because the dependencies now run to concrete (and, in different classes, probably different) implementations of the former interface method.

    /* Listing 5 */
    interface IFingers
    {
        void Thumb();
        void IndexFinger();
        // an obsolete interface method
        // void MiddleFinger();
    }

    public class HumanPalm : IFingers
    {
        public void Thumb() { }
        public void IndexFinger() { }
        // here is a "dangling" public method
        public void MiddleFinger() { }
    }

    public class AntropoidHand : IFingers
    {
        void IFingers.Thumb() { }
        void IFingers.IndexFinger() { }
        // here is the compiler error
        void IFingers.MiddleFinger() { }
    }

    With a private (explicit) implementation of interfaces, every class that explicitly implements the removed method simply stops compiling; once the obsolete implementation is deleted (or refactored into a new method), no "extra" public method unattached to any interface is left behind. Of course, the dependencies on the interface itself may also need refactoring, but at least the principle of "program to interfaces, not implementations" will not be violated.

    As for properties, implicitly implemented interface properties can be accessed through their accessors (getter and setter) both from the outside and from within the class itself, which can lead to unwanted effects (for example, redundant data validation when initializing the property).

    /* Listing 6 */
    interface IProperty
    {
        int Amount { get; set; }
    }

    public class ClassWithProperty : IProperty
    {
        // implicit implementation, public
        public int Amount { get; set; }

        public ClassWithProperty()
        {
            // internal invocation of the public setter
            Amount = 1000;
        }
    }

    public class ClassWithExplicitProperty : IProperty
    {
        // explicit implementation, private
        int IProperty.Amount { get; set; }

        public ClassWithExplicitProperty()
        {
            // internal invocation isn't possible
            // compiler error here
            Amount = 1000;
        }
    }

    When interface properties are implemented explicitly, they remain private, and for internal access you have to go the "long" way: declare an additional private field and initialize through it. The result is cleaner code, in which the property accessors are used only for external access.
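    Besides the private backing field, the explicit property can also be reached from inside the class by casting this to the interface. A small sketch showing both routes (same IProperty contract as in Listing 6):

```csharp
using System;

interface IProperty { int Amount { get; set; } }

public class ClassWithExplicitProperty : IProperty
{
    // the "long" way: a private backing field the class itself can use
    private int amount;

    int IProperty.Amount
    {
        get { return amount; }
        set { amount = value; }
    }

    public ClassWithExplicitProperty()
    {
        amount = 1000;                      // internal access via the field
        // ((IProperty)this).Amount = 1000; // or via a cast to the interface
    }
}

class PropertyDemo
{
    static void Main()
    {
        IProperty p = new ClassWithExplicitProperty();
        Console.WriteLine(p.Amount);        // prints 1000
    }
}
```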

    Using Explicitly Typed Local Variables and Class Fields
    With an explicit interface implementation we must state explicitly that we are working with an instance of the interface rather than of the class. It therefore becomes impossible, for example, to rely on type inference and declare local variables with the var keyword. Instead, we have to use explicit interface type declarations for local variables, as well as in method signatures and class fields.

    Thus, on the one hand, we make the code somewhat less flexible (ReSharper, for example, by default suggests a var declaration wherever possible), but on the other we avoid potential problems caused by binding to a concrete implementation as the system and its code base grow. This point may seem controversial to many, but when several people work on a project, possibly in different parts of the world, explicit typing can be very useful: it improves the readability of the code and reduces the cost of maintaining it.
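    The difference is easy to see with the ICommand/CodeCommand pair from Listing 4, simplified here to a single Execute() method (a hypothetical variant for illustration):

```csharp
using System;

interface ICommand { void Execute(); }

public class CodeCommand : ICommand
{
    void ICommand.Execute() { Console.WriteLine("executing"); }
}

class TypingDemo
{
    static void Main()
    {
        // var infers the concrete class, so the explicitly
        // implemented method is unreachable on the variable:
        var inferred = new CodeCommand();
        // inferred.Execute();   // does not compile

        // An explicitly typed interface variable binds the code
        // to the contract rather than to the implementation:
        ICommand command = new CodeCommand();
        command.Execute();       // prints "executing"
    }
}
```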