
New technologies for information transmission in networks

1. Introduction

The concept of telecommunications

Elements of information theory

1.3.1 Definitions of information.

1.3.2 Amount of information

1.3.3 Entropy

1.4. Messages and Signals

Topic 2. Information networks

2.2. LAN configuration.

Topic 3. Information network architectures

3.2. Reference Model (OSI)

Topic 4. Communication lines and data channels

4.1. Wired communication lines

4.2. Optical communication lines

Topic 5. Data transfer technologies at the physical layer

Topic 6. Data transfer technologies at the data link layer

Topic 7. Information transfer technologies at the network level in composite networks (IP networks)

7.2. Addressing in IP networks

7.3. IP protocol

Lecture 1

Telecommunications. The concept of information. Information transmission systems. Measuring the amount of information

The concept of telecommunications

Before considering information transfer technologies, let us consider the networks (systems) in which various types of information are transmitted. Information (sound, images, data, text) is transmitted over telecommunications and computer networks.

Telecommunications (from the Greek tele, "far off", and the Latin communicatio, "communication") is the transmission and reception of any information (sound, images, data, text) over a distance through various electromagnetic systems (cable and fiber-optic channels, radio channels, and other wired and wireless communication channels).

A telecommunication system is a set of technical objects, organizational measures, and subjects that realize the processes of connection, transmission, and access to information.

Telecommunication systems, together with the communication medium, form telecommunication networks.

It is advisable to divide telecommunication networks by type of communication (telephone networks, data transmission networks, etc.) and, if necessary, to consider them in various aspects (technical, economic, technological, etc.).

Examples of telecommunications networks:

– postal service;

– public telephone networks (PSTN);

– mobile telephone networks;

– telegraph communication;

– the Internet, a global network interconnecting computer networks;

– wired broadcasting networks;

– cable broadcasting networks;

– television and radio broadcasting networks;

– and other information networks.

To implement communication at a distance, telecommunication systems use:

– switching systems;

– data transmission systems;

– systems for access and control of transmission channels;

– information transformation systems.

A data communication system is a collection of communication channels, switching centers, teleprocessing processors, data transmission multiplexers, and software tools for establishing and implementing communication.

A data transmission system (SPD) is understood as the physical medium through which the signal propagates (for example, cable, optical fiber, radio, etc.).

This course of lectures is devoted to the study of information transfer technologies at the physical, data link, and network layers.

The most important aspect of the course is the concept of information. Currently, there is no single definition of information as a scientific term.

Here are some definitions of information:

1. Information (from the Latin informatio, "clarification, exposition, awareness") is information (messages, data), regardless of the form of its presentation.

2. Information is information about persons, objects, facts, events, phenomena, and processes, regardless of the form of its presentation.

Information reduces uncertainty and incompleteness of knowledge about persons, objects, events, etc.

In information theory, entropy is the measure of uncertainty of an experiment (trial) that can have different outcomes, and hence a measure of the amount of information.

In the broad sense in which the word is often used in everyday life, entropy means a measure of disorder in a system; the less the elements of the system are subject to some order, the higher the entropy.

The more information, the more ordered the system; conversely, the less information, the more chaotic the system and the higher its entropy.
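This relationship can be illustrated numerically with Shannon's formula H = -Σ p·log2(p). The Python sketch below is a minimal illustration and not part of the course materials; the probability distributions are arbitrary examples:

```python
import math

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

# A fair coin: maximum uncertainty for two outcomes, 1 bit.
print(entropy([0.5, 0.5]))   # 1.0

# A heavily biased coin: more "order", less uncertainty (about 0.47 bits).
print(entropy([0.9, 0.1]))

# A certain outcome carries no uncertainty and no information.
print(entropy([1.0]))        # 0.0
```

Note how the more "ordered" (predictable) source has the lower entropy, matching the statement above.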

Communication: information - message - signal

A message is information expressed in a specific form and intended for transmission from a source to a user (texts, photos, speech, music, a television picture, etc.). Information is the part of a message that represents novelty, i.e., something that was not previously known.

A signal is a physical process that propagates in space and time and whose parameters are able to carry (contain) a message.

A signal is used to transfer information: it is a physical quantity, and the information is connected in some way with its parameters.

Thus, a signal is a physical quantity that changes in a certain way. Telecommunication systems and networks use electrical, optical, electromagnetic, and other types of signals.

Telephone networks

The first stage in the development of telephone networks was the public switched telephone network (PSTN). The PSTN is a collection of PBXs connected by analog or digital communication lines (trunks), together with user (terminal) equipment connected to the PBXs via subscriber lines. PSTNs use circuit-switching technology. The advantage of circuit-switched networks is the ability to transmit audio and video information without delay; the disadvantages are low channel utilization, the high cost of data transmission, and increased waiting time for other users.

The second stage is ISDN telephone networks, the current generation of digital telephone networks. ISDN (Integrated Services Digital Network) is a network with integrated services in which only digital signals are transmitted over telephone channels, including subscriber lines.

As an ISDN BRI line, the telephone company most often uses the copper cable of the public switched telephone network (PSTN), thereby reducing the final cost of the ISDN line.

Digital networks with integrated services (ISDN) can be used to solve a wide class of information transfer problems in various fields, in particular: telephony; data transfer; interconnection of remote LANs; access to global computer networks (the Internet); transmission of delay-sensitive traffic (video, sound); and integration of various kinds of traffic.

An ISDN network end device can be: a digital telephone, a separate computer with an ISDN adapter installed, a file or specialized server, a LAN bridge or router, a terminal adapter with voice interfaces (for connecting a conventional analog telephone or fax), or with serial interfaces (for data transmission).

In Europe, the de facto ISDN standard is EuroISDN, which is supported by most European telecommunications providers and equipment manufacturers.

Currently, cellular mobile switching centers are connected to PSTN and ISDN networks (the cellular networks of different operators are interconnected), which makes it possible to place calls from cell phones to landline phones (PSTN or ISDN) and vice versa.

To connect the Internet (IP networks) with the PSTN, special analog VoIP gateways are used, while digital VoIP gateways are used with ISDN. The voice signal from a VoIP channel can go directly to an analog phone connected to the regular PSTN or to a digital phone connected to an ISDN integrated services digital network.

As primary networks in fixed telephony, copper cable and PDH / SDH are used to combine PBXs.

Cellular networks

Cellular communication is a wireless telecommunication system consisting of (1) a network of ground-based base transceiver stations, (2) small mobile stations (cellular radiotelephones), and (3) a cellular switch (mobile services switching center). The generations of cellular communication are 1G, 2G, 2.5G, 3G, 4G, and 5G; GSM (Global System for Mobile Communications) is the most widespread 2G standard.

TV networks

Television networks (terrestrial, cable, and satellite) are designed to transmit video. Cable TV uses unswitched communication channels. At first, video was transmitted in analog form; later, cable and satellite television moved to digital signals. At present, analog television broadcasting is ceasing to exist, and all types of television broadcasting are moving to transmitting signals in digital form.

Digital TV broadcasting is based on open standards and developed under the control of the DVB consortium.

The most widely used systems are:

· digital satellite broadcasting - DVB-S (DVB-S2);

· digital cable broadcasting - DVB-C;

· digital terrestrial broadcasting - DVB-T (DVB-T2);

· digital broadcasting for mobile devices - DVB-H;

· IP television over DVB (IPTV);

· Internet TV, or streaming.

As for DVB-H, DVB-IPTV, and Internet TV, these are the result of the integration (convergence) of various networks, as well as of terminal devices.

Mobile TV DVB-H is a mobile broadcasting technology that delivers digital video to mobile devices such as PDAs, mobile phones, or portable TVs.

It is important to note that IPTV (IP over DVB or IP over MPEG) is not television broadcast over the Internet. IPTV resembles ordinary cable television, except that it reaches the subscriber's terminal not via a coaxial cable but via the same channel as the Internet (an ADSL modem or Ethernet).

IPTV is a broadcast of channels (usually received from satellites), mainly in MPEG2/MPEG4 formats, over the provider's transport network, followed by viewing on a computer using a video player (VLC player or IPTV Player) or on a TV using a specialized set-top box device.

Video streaming (Internet TV). The broadcasting model in Internet TV differs significantly from the other concepts. Streaming video refers to data compression and buffering technologies that allow real-time video transmission over the Internet.

Computer networks

Primary networks

Currently, the Internet uses almost all known communication lines from low-speed telephone lines to high-speed digital satellite channels.

The communication channels of global networks are organized by primary networks based on FDM, PDH/SDH, and DWDM technologies.

Since IP traffic today is an indispensable attribute of any data transmission network and it is simply impossible not to support it, most large global networks, especially those of telecom operators, are built on a four-layer scheme in order to provide quality services.

Fig. 10. Four-layer structure of a modern global network

The two lower layers do not belong to packet networks proper; they are the layers of the primary network.

Primary, or core, networks are designed to create a switched infrastructure. On the basis of the channels formed by the primary networks, the secondary (computer or telephone) networks operate.

At the lowest layer operates DWDM (Dense Wavelength Division Multiplexing), the fastest transmission technology to date, which forms spectral channels at speeds of 10 Gbps and higher. WDM (Wavelength Division Multiplexing) is an optical multiplexing technology, usually called wavelength-division multiplexing. Almost any equipment can be connected to a WDM (DWDM, CWDM) multiplexer: SONET/SDH, ATM, Ethernet.

At the next layer, SDH (Synchronous Digital Hierarchy) technology operates. The PDH/SDH standards were developed for high-speed optical communication networks: first PDH (Plesiochronous Digital Hierarchy), and then the more advanced SDH, common in Europe, together with its American counterpart SONET. SONET/SDH uses time-division multiplexing and synchronization of traffic time slots between network elements, and defines the data-rate levels and physical parameters.
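The time-division multiplexing used by SONET/SDH can be illustrated with a toy byte-interleaving model in Python. This sketch is purely illustrative (real SDH frames add overhead bytes, pointers, and strict timing, all omitted here): each input channel gets one byte slot per frame in round-robin order.

```python
def tdm_multiplex(channels):
    """Interleave one byte per channel per frame (round-robin time slots)."""
    assert len({len(c) for c in channels}) == 1, "equal-length channels assumed"
    stream = bytearray()
    for time_slot in zip(*channels):   # one frame = one byte from each channel
        stream.extend(time_slot)
    return bytes(stream)

def tdm_demultiplex(stream, n_channels):
    """Recover channel i by taking every n-th byte, starting at offset i."""
    return [stream[i::n_channels] for i in range(n_channels)]

channels = [b"AAAA", b"BBBB", b"CCCC"]
line = tdm_multiplex(channels)
print(line)                      # b'ABCABCABCABC'
print(tdm_demultiplex(line, 3))  # [b'AAAA', b'BBBB', b'CCCC']
```

Because each channel owns fixed time slots, the aggregate line rate is shared deterministically, which is exactly why circuit-like primary networks offer constant delay.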

The third level is formed by the ATM network, the main purpose of which is to create an infrastructure of permanent virtual channels connecting the interfaces of IP routers operating on the third, upper level of the global network.

The IP layer forms the composite network and provides services to end users who transmit their IP traffic over the WAN in transit or interact via IP with the Internet.

The Internet also uses "pure" IP networks, so called because there is no other packet-switched network, such as ATM, below the IP layer.

The structure of a "pure" IP network is shown in the figure below.

Fig. 11. Structure of a "pure" IP network

In such a network, digital channels are still formed by the infrastructure of the two lower layers, and these links are used directly by the interfaces of IP routers, without any intermediate layer.

The development of communication networks has shown the need to integrate sound, images and other types of data in order to be able to transmit them together. Since discrete communication channels are more reliable and more economical than analog communication channels, they were taken as the basis. In this regard, the number of analog networks is rapidly declining and they are being replaced by discrete ones.

Softswitch

Softswitch (soft switch) is a flexible software switch, one of the main elements of the control layer of the next-generation communication network (NGN).

Fig. 15. Softswitch as part of the public communications network

A softswitch is an NGN control device designed to separate call-control functions from switching functions; it is capable of serving a large number of subscribers, interacting with application servers, and supporting open standards. The softswitch is the bearer of the IP network's intelligence: it coordinates call service control, signaling, and connectivity functions across one or more networks.

An important function of the softswitch is also connecting next-generation NGN networks with existing traditional PSTN networks via signaling gateways (SG) and media gateways (MG).

Information transfer technologies

Topic 1. Basic concepts of information and information transmission systems

1. Introduction

The concept of telecommunications

Elements of information theory

1.3.1 Definitions of information.

1.3.2 Amount of information

1.3.3 Entropy

1.4. Messages and Signals

1.5. The main directions of development of telecommunication technologies

Topic 2. Information networks

2.1. Characteristics and classification of information networks

2.2. LAN configuration.

2.3. Basic network topologies

2.4. Network technologies of local networks

2.5. Ways to build information networks

Topic 3. Information network architectures

3.1. Multilayer architecture of information networks

3.2. Reference Model (OSI)

Topic 4. Communication lines and data channels

4.1. Wired communication lines

4.2. Optical communication lines

4.3. Wireless communication channels

4.4. Satellite data channels

Topic 5. Data transfer technologies at the physical layer

5.1 Basic functions of the physical layer

5.2. Methods of converting discrete signals (modulation and coding):

5.2.1. Analog modulation of discrete signals (AM, FM, PM)

5.2.2. Digital coding of discrete signals (pulse and potential)

5.3. Pulse-code modulation (PCM) of analog signals

5.4. Multiplexing methods:

5.4.1. FDM method

5.4.2. Time Division Multiplexing TDM

5.4.3. Wavelength-division multiplexing WDM (in fiber-optic communication channels)

Topic 6. Data transfer technologies at the data link layer.

6.1. Data transfer technologies at the data link level in LAN and leased lines (Ethernet, Token Ring, FDDI; SLIP, HDLC, PPP)

6.2. WAN link layer or backbone transport technologies (X.25, Frame Relay, ATM, MPLS, Ethernet; ISDN, PDH, SDH/SONET, WDM/DWDM)

Topic 7. Information transfer technologies at the network level in composite networks (IP networks)

7.1. Networking based on the network layer

7.2. Addressing in IP networks

7.3. IP protocol

7.4. Routing in data networks.

7.5. Data flow management.

The course curriculum of 108 academic hours comprises one substantive (educational) module of 3 credits (one ECTS credit corresponds to 36 academic hours) and consists of classroom studies and independent work by students.

When reviewing information transfer technologies, one cannot fail to mention the OSI model, which describes the structure of an ideal network architecture. Each interface and transmission protocol discussed in this graduation project occupies its own specific level in this model.

    1. The OSI model

In order for the various components of a network to communicate, they must use the same information exchange protocol, that is, they must "speak" the same language. A protocol defines a set of rules for organizing the exchange of information at all levels of interaction between network objects. The OSI (Open Systems Interconnection) model, developed by the International Organization for Standardization (ISO), is used as a "ruler" for defining these levels. The OSI model distinguishes seven layers of interaction in the process of exchanging information between devices on a network. Each network layer is relatively autonomous and is considered separately; the OSI model defines the functions of each layer. In essence, this model contains two different models:

    a horizontal protocol-based model that provides a mechanism for the interaction of programs and processes on different machines;

    a vertical model based on services provided by neighboring layers to each other on the same machine.
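The vertical interaction between layers can be illustrated by a toy encapsulation sketch in Python: on the way down the stack each layer prepends its own header, and the receiving side strips the headers in reverse order. Only three of the seven layers are modeled, and the bracketed text "headers" are hypothetical placeholders, not real protocol formats.

```python
# Toy OSI-style encapsulation with hypothetical text headers.
LAYERS = ["transport", "network", "data link"]  # traversed top to bottom

def send(payload: bytes) -> bytes:
    """Pass data down the stack; each lower layer prepends its own header."""
    pdu = payload
    for layer in LAYERS:
        pdu = f"[{layer}]".encode() + pdu
    return pdu

def receive(pdu: bytes) -> bytes:
    """Pass data up the stack; each layer strips the header its peer added."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]".encode()
        assert pdu.startswith(header), f"malformed {layer} header"
        pdu = pdu[len(header):]
    return pdu

frame = send(b"GET /index.html")
print(frame)            # b'[data link][network][transport]GET /index.html'
print(receive(frame))   # b'GET /index.html'
```

Each layer only inspects its own header and treats everything inside as opaque payload, which is the essence of the horizontal (peer-to-peer) protocol model.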

Figure 1.1.1 OSI model

The physical layer is the lowest layer of the model; it defines the method of transferring data, represented in binary form, from one device (computer) to another. Electrical or optical signals are transmitted over a cable or over the air in accordance with digital signal encoding methods. The physical layer specifications define voltage levels, voltage timing, physical information transfer rates, maximum transmission distances, media requirements, physical connectors, and other similar characteristics.

Physical layer functions are implemented on all devices connected to the network. On the computer side, the functions of the physical layer are performed by a network adapter that provides a mechanical interface for connecting the computer to a transmission medium or serial port. The physical layer defines such types of data transmission media as fiber optic, twisted pair, coaxial cable, satellite data link, etc.

The standard types of network interfaces related to the physical layer are: USB, RS-232, RS-485, RJ-45, Ethernet physical interfaces (10BASE-T, 100BASE-T and 1000BASE-TX). Basic physical layer protocols: IEEE 802.15 (bluetooth), EIA RS-232, RS-485, DSL(digital subscriber line), ISDN (integrated services digital network), 802.11 Wi-Fi, GSM, RFID, 802.15.4.

The data link layer provides reliable data transit through the physical channel. It packs the data received from the physical layer, represented as bits, into frames, checks them for integrity, corrects errors if necessary (by forming a repeat request for a damaged frame), and passes the data to the network layer. In carrying out this task, the link layer deals with physical addressing, network topology, fault notification, in-order delivery of data blocks, and information flow control. This layer is usually divided into two sublayers: LLC (Logical Link Control) in the upper half, which performs error checking and serves the network layer, and MAC (Media Access Control) in the lower half, which is responsible for physical addressing and for receiving/transmitting packets at the physical layer. Switches, bridges, and similar devices work at this level; they are called layer-two devices.

Link layer protocols: Controller Area Network (CAN), IEEE 802.3 Ethernet, Fiber Distributed Data Interface (FDDI), Frame Relay, IEEE 802.11 wireless LAN, 802.15.4, Point-to-Point Protocol (PPP), Token Ring, X.25, ATM.

In programming, this level represents the network card driver; in operating systems, there is a software interface for the interaction of the channel and network levels with each other. This is not a new level, but simply an implementation of the model for a specific OS. Examples of such interfaces: ODI, NDIS, UDI.

The network layer provides connection and route selection between two end systems connected to different "subnets", which may be located in different geographical locations. The network layer is responsible for translating logical addresses and names into physical ones, determining the shortest routes, switching and routing, and tracking problems and "congestion" in the network. Network layer protocols route data from a source to a destination. Devices (routers) operating at this level are conventionally called layer-three devices (after the layer number in the OSI model).

Network layer protocols: IP/IPv4/IPv6 (Internet Protocol), IPX (Internetwork Packet Exchange), X.25 (partially implemented at layer 2), IPsec (Internet Protocol Security). Routing protocols - RIP (Routing Information Protocol), OSPF (Open Shortest Path First).

The transport layer (transport layer) - the highest of the layers responsible for transporting data, is designed to ensure reliable data transfer from the sender to the recipient. At the same time, the level of reliability can vary over a wide range. There are many classes of transport layer protocols, ranging from protocols that provide only basic transport functions (for example, data transfer functions without acknowledgment), to protocols that ensure that multiple data packets are delivered to the destination in the correct sequence, multiplex multiple data streams, provide data flow control mechanism and guarantee the validity of the received data.

For example, UDP limits itself to data integrity control within a single datagram and does not exclude the possibility of losing an entire packet, duplicating packets, or violating the order in which packets arrive. It adds two fields on top of the IP packet header: the "port" field, which provides multiplexing of information between different application processes, and the "checksum" field, which makes it possible to verify data integrity.

Examples of network applications using UDP are NFS and SNMP.
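The UDP datagram layout mentioned above can be sketched with Python's struct module. Per RFC 768, the 8-byte UDP header carries four 16-bit big-endian fields: source port, destination port, length, and checksum (a checksum of 0 means "no checksum" in IPv4). The port numbers below are arbitrary examples.

```python
import struct

# RFC 768 UDP header: four 16-bit big-endian fields, 8 bytes in total.
def build_udp_datagram(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)                  # header + data, in bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload

def parse_udp_header(datagram):
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src, "dst_port": dst, "length": length,
            "checksum": checksum, "payload": datagram[8:]}

dgram = build_udp_datagram(40000, 53, b"dns-query")   # ports are examples
info = parse_udp_header(dgram)
print(info["dst_port"], info["length"])   # 53 17
```

The destination port (53 here) is what lets the receiving host deliver the datagram to the right application process; this is the multiplexing role of the "port" field described above.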

TCP provides reliable continuous data transmission that excludes data loss, duplication, and violation of the order of arrival; it can repackage data, breaking large portions into fragments and, conversely, gluing fragments together into one packet.
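The segmentation and in-order reassembly described here can be modeled in a few lines of Python. This is a toy model of the idea only: real TCP uses byte-oriented sequence numbers with acknowledgments and retransmission timers, all omitted here.

```python
def segment(data: bytes, mss: int = 4):
    """Split a byte stream into (seq, chunk) segments of at most mss bytes."""
    return [(seq, data[seq:seq + mss]) for seq in range(0, len(data), mss)]

def reassemble(segments):
    """Rebuild the stream from segments arriving in any order,
    discarding duplicates; sequence numbers restore the order."""
    unique = {seq: chunk for seq, chunk in segments}
    return b"".join(chunk for _, chunk in sorted(unique.items()))

segs = segment(b"HELLO, WORLD", mss=4)
shuffled = [segs[2], segs[0], segs[2], segs[1]]   # out of order, one duplicate
print(reassemble(shuffled))                        # b'HELLO, WORLD'
```

Even with reordered and duplicated segments, the receiver reconstructs the original stream, which is precisely the guarantee TCP adds on top of the unreliable IP layer.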

The main transport layer protocols are: SPX (Sequenced Packet Exchange), TCP (Transmission Control Protocol), UDP (User Datagram Protocol).

The session layer synchronizes the conversation between presentation layer objects and manages the creation/termination of a session, the exchange of information, the determination of the right to transfer data, and the maintenance of the session during periods of application inactivity. Sessions consist of a dialogue between two or more presentation objects. Examples of software tools that support the session layer are the NetBIOS interfaces of Windows networks and the sockets of TCP/IP networks.

The presentation layer is responsible for ensuring that information sent from the application layer of one system is readable by the application layer of another system. If necessary, the presentation layer translates between a plurality of information presentation formats by using a common information presentation format. If necessary, not only the actual data is transformed, but also the data structures used by the programs. The presentation layer is responsible for allowing dialogue between applications on different machines. This layer provides data transformation (coding, compression, etc.) of the application layer into an information stream for the transport layer. Presentation layer protocols are typically part of the functionality of the top three layers of the model.

Application layer (application layer) - the top layer of the OSI model, which ensures the interaction of user applications with the network:

    allows applications to use network services:

    • remote access to files and databases,

      email forwarding;

    responsible for the transfer of service information;

    provides applications with error information;

    generates requests to the presentation layer.

Application layer protocols: HTTP, SMTP, SNMP, POP3, FTP, TELNET, and others.

Studying the structure of this model allows you to create a clearer picture of the location of each network technology in a complex networking system.

      Object identification systems

The idea of automated object recognition is itself not new. At least five types of identification are known:

    optical: systems based on barcodes, character recognition;

    magnetic: magnetic stripe, recognition of marks applied by magnetic media;

    radio frequency identification (RFID) and data transmission: plastic smart cards with an integrated microchip, radio tags;

    biometric: fingerprint recognition, iris pattern scanning;

    acoustic: identification by sound parameters (voice).

        Optical identification

Optical identification is the principle of selecting individual components of a system among many similar ones using a point source of optical radiation in the visible wavelength range.

Optical identification is often used on railways. Video analytics equipment provides automated monitoring of the railway track, the adjacent territory (right of way), and other infrastructure facilities using technical means of video surveillance.

The equipment solves the following tasks:

    registration, transmission and analytical processing of video information about the situation at protected facilities;

    automatic formation of an operational alarm signal in the event of an emergency (alarming) situation;

    continuous monitoring of the performance of all components of the complex and automatic detection of unauthorized changes to its settings.

Video analytical processing algorithms built into the equipment should provide:

    automatic detection, tracking and classification of targets on the approaches to the railway track and other infrastructure facilities;

    classification of targets by type of behavior, including appearance in a given zone;

    image quality control and automatic generation of an alarm message in case of significant quality degradation.

In addition, optical identification is used to control the movement of rolling stock of railway transport (RT) by automatically detecting and identifying cars, tanks and platforms by their registration number.

The camera is installed on a rack, at a height of up to 6 meters and is directed along the railway track. The objects of video analysis are people and vehicles moving randomly in the field of view of the camera. The equipment supports various profiles of the ONVIF (Open Network Video Interface Forum) standard. ONVIF is an industry standard that defines protocols for the interaction of devices such as IP cameras, DVRs, and video management systems.

The disadvantages of optical identification are the potential contamination of cameras located in difficult locations, the effect of interference on image quality and, consequently, on identification, and the rather high cost of such systems (sets of cameras and image analyzers).

        RFID

RFID (Radio Frequency IDentification) - radio frequency identification, a method of automatic identification of objects in which data stored in so-called transponders, or RFID tags, are read or written using radio signals. Any RFID system includes the following components:

    reading device (reader, also called an interrogator);

    transponder (RFID tag).

Most RFID tags come in two parts. The first is an integrated circuit (IC) for storing and processing information, modulating and demodulating a radio frequency (RF) signal, and some other functions. The second is an antenna for receiving and transmitting a signal.

Figure 1.2.2.1 RFID Antenna

There are several ways to organize RFID tags and systems:

    By operating frequency

    • LF band tags (125-134 kHz). Passive systems in this range are low priced and, due to their physical characteristics, are used for subcutaneous tags in the microchipping of animals, humans, and fish. However, due to the wavelength, there are problems with long-distance reading, as well as problems with read collisions.

      HF band tags (13.56 MHz). The advantages of these systems are that they are cheap, have no environmental and licensing problems, are well standardized, and have a wide range of solutions. They are used in payment systems, logistics, personal identification. For a frequency of 13.56 MHz, the ISO 14443 standard (types A / B) was developed. However, there are problems with reading over long distances, in conditions of high humidity, in the presence of metal, as well as problems associated with the appearance of collisions during reading.

      UHF band tags (860-960 MHz). Tags of this range have the greatest registration range, and many standards of this range include anti-collision mechanisms. In UHF RFID systems, compared to LF and HF, the cost of tags is lower, while the cost of other equipment is higher. The UHF frequency band is currently open for free use in the Russian Federation in the so-called "European" band, 863-868 MHz, and in the "American" band ____.

    By power source

    • Passive

      Active

      semi-passive

    By type of memory

    • RO (Read Only) - contain only the identifier. Data is written only once during production

      WORM (Write Once Read Many) - contain an identifier and a block of write-once memory

      RW (Read and Write) - contain an identifier and a memory block for multiple recording of information. The data in them can be overwritten multiple times.

    Reading distance

      Near identification (reading at a distance up to 20 cm)

      Medium range identification (from 20 cm to 10 m)

      Long range identification (from 5 m to 300 m)

    By execution

Passive RFID tags do not have a built-in energy source. The electric current induced in the antenna by the electromagnetic signal from the reader provides enough power for the operation of the silicon chip located in the tag and the transmission of the response signal. In practice, the maximum reading distance of passive tags varies from 10 cm (4 inches) (according to ISO 14443) to several meters (EPC and ISO 18000-6), depending on the selected frequency and antenna size. Passive tags (860-960 MHz) transmit a signal by modulating the reflected carrier signal (backscatter modulation). The reader antenna emits a carrier frequency signal and receives the modulated signal reflected from the tag.

Active RFID tags have their own power supply and do not depend on the energy of the reader, as a result of which they are read at a long distance (up to 300 meters), are larger and can be equipped with additional electronics. However, these tags are the most expensive and the batteries have a limited run time. Active tags are in most cases more reliable and provide the highest reading accuracy at the maximum distance. Active tags, having their own power supply, can also generate an output signal of a higher level than passive tags, allowing them to be used in environments that are more aggressive for the RF signal: water, air.

Semi-passive RFID tags, also called semi-active tags, are very similar to passive tags but have a battery that powers the chip. At the same time, the range of these tags depends only on the sensitivity of the reader's receiver and they can function at a greater distance and with better characteristics.
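As a rough summary of the three power-source classes, the sketch below encodes nominal maximum read ranges taken from the text above and selects the classes that cover a required reading distance. The semi-passive figure is an assumed placeholder, since the text gives no number for it.

```python
# Nominal maximum read ranges, orders of magnitude only; real ranges depend
# on frequency, antenna size, and reader sensitivity.
MAX_RANGE_M = {
    "passive": 10,        # from ~0.1 m (ISO 14443) to several meters (ISO 18000-6)
    "semi-passive": 30,   # assumption: limited mainly by reader sensitivity
    "active": 300,        # own power supply, longest range per the text
}

def candidate_tags(required_range_m):
    """Tag classes whose nominal maximum range covers the requirement."""
    return [kind for kind, rng in MAX_RANGE_M.items() if rng >= required_range_m]

print(candidate_tags(0.1))   # ['passive', 'semi-passive', 'active']
print(candidate_tags(100))   # ['active']
```

Such a lookup mirrors the practical selection process: short-range, low-cost applications can use passive tags, while long-range tracking forces the move to active tags despite their cost and battery life constraints.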

Information readers are devices that read information from tags and write data to them. These devices can be permanently connected to the accounting system, or work autonomously. Readers are divided into stationary and mobile.

Figure 1.2.2.2 RFID Reader

International RFID standards, as an integral part of automatic identification technology, are developed and adopted by the international organization ISO together with IEC.

The division of tags into classes was accepted long before the EPCglobal initiative to streamline the large number of RFID protocols, but there was no generally accepted protocol for exchange between readers and tags, which led to incompatibility between readers and tags from different manufacturers. In 2004, ISO/IEC adopted a single international standard, ISO 18000, which describes the exchange protocols (radio interfaces) in all RFID frequency ranges from 135 kHz to 2.45 GHz; the UHF range (860-960 MHz) corresponds to ISO 18000-6A/B. The same year, EPCglobal specialists created a new protocol for exchange between a reader and a UHF tag, Class 1 Generation 2. In 2006, the EPC Gen2 proposal, with minor changes, was adopted by ISO/IEC as type C alongside the existing types A and B of ISO 18000-6, and ISO/IEC 18000-6C is currently the most widely used RFID technology standard in the UHF band.

The disadvantages of RFID are:

    the performance of the tag is lost in case of partial mechanical damage;

    susceptibility to interference in the form of electromagnetic fields;

    insufficient openness of the developed standards.

In this section, the main technologies for identifying objects were considered. Among them, special attention was paid to RFID and optical identification, which can be used to initiate the connection of a fixed control room with a train traffic recorder (TRDR).

      Wireless technologies

To carry out the exchange of information between the fixed control post and the train traffic recorder (TRDR), it was decided to study the existing wireless data transmission technologies in order to select the most appropriate one.

        Bluetooth

Bluetooth technology (IEEE 802.15 standard) was the first technology for organizing a wireless personal area network (WPAN, Wireless Personal Area Network). It allows the transmission of data and voice over a radio channel over short distances (10-100 m) in the unlicensed 2.4 GHz band and connects PCs, mobile phones and other devices without requiring line of sight. The main design goals were a radio interface with low power consumption and low cost that would allow communication between cell phones and wireless headsets.

BlueTooth Wireless Data Transfer Protocol Stack:

Figure 1.3.1.1 Bluetooth protocol stack

BlueTooth technology supports both point-to-point and point-to-multipoint connections. Two or more devices using the same channel form a piconet. One of the devices works as a master (master), and the rest - as slaves (slave). There can be up to seven active slaves in one piconet, with the remaining slaves in the "parked" state, remaining synchronized with the master. Interacting piconets form a "distributed network" (scatternet). Each piconet has only one master device, but slave devices can be part of different piconets. In addition, the master device of one piconet may be a slave device in another.
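The piconet rules above (one master, at most seven active slaves, further slaves parked but synchronized) can be sketched as a toy model. The class and names below are illustrative only, not part of any Bluetooth API:

```python
class Piconet:
    """Toy model of a Bluetooth piconet: one master, up to 7 active slaves."""
    MAX_ACTIVE_SLAVES = 7

    def __init__(self, master):
        self.master = master
        self.active = []   # actively polled slaves
        self.parked = []   # synchronized with the master but inactive

    def add_slave(self, device):
        # A slave beyond the 7th goes to the "parked" state.
        if len(self.active) < self.MAX_ACTIVE_SLAVES:
            self.active.append(device)
        else:
            self.parked.append(device)

# A device may take part in several piconets (forming a scatternet),
# and the master of one piconet can be a slave in another.
net = Piconet("phone")
for n in range(9):
    net.add_slave(f"headset-{n}")
```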

In most cases, developers use Bluetooth technology to replace a wired serial connection between two devices with a wireless one. To simplify establishing a connection and transferring data, a firmware version for Bluetooth modules was developed that provides a complete software implementation of the entire Bluetooth protocol stack (Figure 1.3.1.1), as well as the SPP (Serial Port Profile) and SDP (Service Discovery Profile) profiles. This solution lets the developer control the module, establish a wireless serial connection and transfer data using special character commands. However, it imposes certain restrictions on the use of Bluetooth's capabilities, mainly a decrease in the maximum throughput and in the number of simultaneous asynchronous connections supported by the module.

In mid-2004, the BlueTooth specification version 1.1, which was published in 2001, was replaced by the BlueTooth specification version 1.2. The main differences between specification 1.2 and 1.1 include:

    Implementation of adaptive frequency hopping (Adaptive Frequency Hopping, AFH) to avoid collisions.

    Reducing the time it takes to establish a connection between two BlueTooth modules.

Bluetooth and Wi-Fi are known to use the same unlicensed 2.4 GHz band. Therefore, when Bluetooth devices communicate within range of Wi-Fi devices, collisions can occur and degrade the performance of both. AFH technology avoids such collisions: while hopping between frequency channels to combat interference, a Bluetooth device excludes from its hop sequence the channels on which nearby Wi-Fi devices are communicating.
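As a rough sketch of the AFH idea: the channel numbering below follows Bluetooth's 79 one-MHz channels, but the "busy" set and the random hop selection are simplified assumptions, not the real clock-driven hop sequence:

```python
import random

# Bluetooth hops across 79 channels of 1 MHz each: 2402 + k MHz, k = 0..78.
ALL_CHANNELS = set(range(79))

def afh_channel_map(busy):
    """Adaptive Frequency Hopping: drop channels a nearby Wi-Fi net occupies."""
    return sorted(ALL_CHANNELS - set(busy))

# Assume (illustratively) that one Wi-Fi channel overlaps Bluetooth channels 0..21.
usable = afh_channel_map(range(22))

def next_hop(rng, channel_map):
    # The real hop sequence is derived from the master's clock and address;
    # a plain RNG stands in for it here.
    return rng.choice(channel_map)

rng = random.Random(7)
hops = [next_hop(rng, usable) for _ in range(200)]
```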

The Bluetooth technology development roadmap drawn up by the Bluetooth SIG consortium is shown below:

Figure 1.3.1.2 Development stages of Bluetooth technology

Currently, there are many companies on the market offering Bluetooth modules, as well as components for independent implementation of Bluetooth device hardware. Virtually all manufacturers offer modules that support Bluetooth specification versions 1.1 and 1.2, in class 2 (range 10 m) and class 1 (range 100 m). However, although version 1.2 is fully backward compatible with version 1.1, all of the version 1.2 enhancements discussed above are available only if both devices comply with version 1.2.

In November 2004, the BlueTooth specification version 2.0 was adopted, supporting Enhanced Data Rate (EDR) technology. Specification 2.0 with EDR support allows data exchange at speeds up to 3 Mbps. The first mass-produced samples of modules corresponding to version 2.0 and supporting EDR technology were offered by manufacturers at the end of 2005. The range of such modules is 10 m in the absence of line of sight, which corresponds to class 2, and in the presence of line of sight it can reach 30 m.

As noted earlier, the main purpose of Bluetooth technology is to replace a wired serial connection. Bluetooth defines, among others, the following profiles: the LAN Access Profile, the Generic Object Exchange Profile, the Object Push Profile, the File Transfer Profile and the Synchronization Profile.

        Wi-Fi

A Wi-Fi wireless network uses radio waves, just like cell phones, televisions and radios. The exchange of information over a wireless network is in many ways similar to two-way radio communication.

Most Wi-Fi equipment can be divided into two large groups:

    WiFi routers (routers) and access points

    terminal equipment of users equipped with Wi-Fi adapters.

The computer's wireless adapter converts data into a radio signal and transmits it over the air using an antenna. The wireless router receives and decodes this signal. Information from the router is sent to the Internet over a wired Ethernet cable.

In fact, both Wi-Fi routers and Wi-Fi access points perform the same basic function: they create radio coverage (AP mode) within which any device equipped with an adapter can connect to the network in AP-client mode. There the similarity ends; the devices differ both visually and structurally. A classic Wi-Fi access point has only one Ethernet port, while a classic Wi-Fi router has five. On the router, the WAN port, used to connect the provider's cable, is allocated separately; the remaining Ethernet ports are labeled LAN and are used to connect, over twisted pair, the clients of the local network that the router creates.

In the factory settings of an access point, the DHCP server is disabled, so to connect to it via Ethernet or Wi-Fi the network adapter must be assigned a static IP address. On routers, the DHCP server is enabled by default, and any client of the router can obtain an IP address from this server automatically; to do so, the DHCP client service of the adapter used to connect to the router must be configured to obtain an IP address automatically. In addition to the DHCP server, routers are equipped with a hardware and software firewall that reduces the likelihood of hacker attacks and theft of confidential information from clients of the local network the router creates, although it does not guarantee complete protection.

A typical Wi-Fi network contains at least one access point and at least one client. The access point broadcasts its network identifier (SSID) in special beacon packets, sent at 0.1 Mbps every 100 ms. Knowing the SSID, a client can determine whether it can connect to this access point. When two access points with identical SSIDs are within range, the receiver can choose between them based on signal strength.
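The last step, choosing between access points that advertise the same SSID by signal strength, can be sketched as follows (the scan data and the function are illustrative, not any driver's API):

```python
def choose_access_point(scan_results, ssid):
    """Among APs advertising the given SSID, pick the strongest signal.

    scan_results: list of (ssid, rssi_dbm) tuples; RSSI is in dBm,
    so a value closer to zero means a stronger signal.
    """
    candidates = [(s, rssi) for s, rssi in scan_results if s == ssid]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])

# Two "office" APs in range: the client picks the one at -48 dBm.
scan = [("office", -70), ("office", -48), ("guest", -40)]
best = choose_access_point(scan, "office")
```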

When using Wi-Fi equipment, several main modes of operation can be distinguished: point-to-point, infrastructure mode, operation in bridge mode and repeater mode. Let's take a closer look at each of these modes of operation.

In the point-to-point mode, wireless clients are connected directly to each other, access points are not used in this case. This mode can be used, for example, to connect two computers equipped with Wi-Fi adapters to each other without any additional devices.

Figure 1.3.2.1 Point-to-Point Connection

In infrastructure mode (point-to-multipoint) of operation, all devices connected to a wireless network communicate with each other through an intermediate device called an access point (AP, access point).

Figure 1.3.2.2 Infrastructure mode of operation

The wireless bridge mode is used when it is necessary to connect two wired local networks that are a short distance from each other (20-250 m), but there is no way to lay cables. In this case, wireless clients cannot connect to access points, and the access points themselves are used only to transit traffic from one local wired network to another.

The adapters (transceivers) used for Wi-Fi are very similar to those used in duplex portable radios, cell phones and other similar devices. They can transmit and receive radio waves, converting the ones and zeros of a digital signal into radio waves and back. At the same time, there are notable differences between Wi-Fi transceivers and those other devices, the most significant being that they operate in different frequency ranges. Most modern laptops and many desktop computers are sold with built-in wireless transceivers; if a laptop lacks one, there are adapters that plug into a PC-card expansion slot or a USB port. After the wireless adapter and the appropriate drivers are installed, the computer can automatically search for available networks.

Wi-Fi transceivers can operate in one of three frequency bands, and can also "hop" quickly from one band to another. This technique reduces the effect of interference and lets many devices use the wireless link simultaneously. Most current Wi-Fi standards use the 2.4 GHz band, more precisely 2400-2483.5 MHz. In addition, current Wi-Fi standards use the 5 GHz range in the bands 5.180-5.240 GHz and 5.745-5.825 GHz. These frequencies are much higher than those used by cell phones, duplex portable radios and broadcast television; at a higher frequency, more data can be transmitted.

WiFi uses 802.11 networking standards in several flavors:

    According to the 802.11a standard, data is transmitted in the 5 GHz band at speeds up to 54 megabits per second. It also provides for orthogonal frequency-division multiplexing OFDM, a more efficient coding technique that splits the original signal at the transmitter side into multiple sub-signals. This approach reduces the impact of interference.

    802.11b is the slowest and least expensive standard. For a while its low cost made it widespread, but it is now being displaced by faster standards as they become cheaper. 802.11b operates in the 2.4 GHz band, with data rates up to 11 megabits per second when complementary code keying (CCK) is used to increase throughput.

    The 802.11g standard, like 802.11b, provides for operation in the 2.4 GHz band, but provides a significantly higher data transfer rate - up to 54 megabits per second. 802.11g is faster because it uses the same OFDM encoding as 802.11a.

    The newest standard is 802.11n. It significantly increased the data transfer rate and extended the frequency range. At the same time, although the 802.11g standard is theoretically capable of providing data transfer rates of 54 megabits per second, the actual speed is approximately 24 megabits per second, due to network congestion. The 802.11n standard can provide data transfer rates of 140 megabits per second. The standard was approved on September 11, 2009 by the Institute of Electrical and Electronics Engineers (IEEE), a world leader in the development and implementation of new standards.

The most common wireless networking standards today are IEEE 802.11b and 802.11g. According to IEEE, equipment for such networks operates in the 2400-2483.5 MHz range and transmits data at a maximum speed of 11 and 54 Mbps, respectively.
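For reference, the band and nominal maximum rate figures quoted above can be gathered into a small lookup table (a sketch using the values given in the text; real achievable throughput is lower than these nominal PHY maxima):

```python
# Parameters as listed in the text above (nominal maximum PHY rates).
WIFI_STANDARDS = {
    "802.11a": {"band_ghz": 5.0, "max_mbps": 54},
    "802.11b": {"band_ghz": 2.4, "max_mbps": 11},
    "802.11g": {"band_ghz": 2.4, "max_mbps": 54},
    "802.11n": {"band_ghz": (2.4, 5.0), "max_mbps": 140},  # dual-band
}

def fastest_in_band(band_ghz):
    """Return the fastest listed standard usable in the given band."""
    def in_band(spec):
        b = spec["band_ghz"]
        return band_ghz == b or (isinstance(b, tuple) and band_ghz in b)
    usable = {name: s for name, s in WIFI_STANDARDS.items() if in_band(s)}
    return max(usable, key=lambda name: usable[name]["max_mbps"])
```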

The propagation of radio waves in this range has a number of peculiar qualities. Despite the functional similarity of wireless and wired equipment, the difference in their installation and configuration is considerable, because of the properties of the physical media used to transmit information. With wireless equipment, the laws of radio wave propagation must be taken into account: radio is more sensitive to various kinds of interference, so partitions, walls and reinforced-concrete floors can affect the data rate. Reception and transmission are degraded not only by physical obstacles; various radio-emitting devices also create interference.

At one time, the standard for security in wireless local networks was Wired Equivalent Privacy (WEP) technology. However, hackers discovered vulnerabilities in WEP, and it is now easy to find applications and programs designed to break into networks with such protection. WEP is based on the RC4 stream cipher, chosen for its high speed and variable key length; CRC-32 is used to calculate checksums.
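The two primitives WEP is built on can be illustrated directly: the RC4 below is the textbook algorithm (checked against the classic "Key"/"Plaintext" test vector), and CRC-32 comes from Python's zlib. This is an illustration of the primitives only, not a WEP implementation; WEP's insecure combination of them is precisely what was broken.

```python
import zlib

def rc4(key: bytes, data: bytes) -> bytes:
    """RC4: key scheduling (KSA), then keystream generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA: permute S under the key
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # PRGA: XOR data with keystream
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

plaintext = b"Plaintext"
ciphertext = rc4(b"Key", plaintext)    # well-known RC4 test vector
checksum = zlib.crc32(plaintext)       # WEP-style (weak) integrity value
```

Since RC4 is a stream cipher, applying it twice with the same key recovers the plaintext; that symmetry is what both the sender and receiver rely on.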

WPA has replaced WEP as the wireless security technology. Its advantages are stronger data protection and tighter control of access to wireless networks. Today, a wireless network is considered secure if it has the three main components of a security system: user authentication, confidentiality and integrity of data transmission. Wi-Fi Protected Access (WPA) is now part of the 802.11i wireless security standard. The technology supports 802.1x-based authentication protocols such as the Extensible Authentication Protocol (EAP), which involves three parties in authentication: the supplicant (client), the authenticator (access point) and the authentication server, significantly increasing the security of the connection. In addition, WPA ensures confidentiality by encrypting traffic with temporary keys using TKIP, and integrity by verifying the MIC (Message Integrity Check). As with WEP, WPA allows login with a password. Most public hotspots are either open or use WPA or 128-bit WEP, although some still use the old, vulnerable WEP system. WPA and WPA2 are developed and promoted by the Wi-Fi Alliance.

To provide even greater security, Media Access Control (MAC) address filtering is sometimes used. It identifies clients not by password but by the physical hardware address of the network interface; each interface has its own unique MAC address. MAC address filtering ensures that only machines with specific MAC addresses can access the network: when configuring the router, you specify which addresses are allowed. The system is not completely reliable, however. A hacker with the right level of knowledge can spoof a MAC address, that is, copy a known allowed address and present it from their own computer, tricking the system into granting access to the network.
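A minimal sketch of MAC allowlist filtering (the addresses are made up for the example). Normalizing case matters because the same address may be written in upper or lower case:

```python
ALLOWED_MACS = {                  # example allowlist; addresses are invented
    "a4:5e:60:01:02:03",
    "08:00:27:aa:bb:cc",
}

def admit(mac: str) -> bool:
    """Admit a client only if its normalized MAC is on the allowlist."""
    return mac.strip().lower() in ALLOWED_MACS

# A spoofing attacker simply presents a copied allowed address,
# which is why MAC filtering alone is not a real security boundary.
```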

Benefits of Wi-Fi

    Allows you to deploy a network without running a cable, which can reduce the cost of deploying and / or expanding the network. Locations where cable cannot be installed, such as outdoors and in historic buildings, can be served by wireless networks.

    Allows mobile devices to access the network.

    Wi-Fi devices are widespread in the market. Hardware compatibility is guaranteed through mandatory Wi-Fi logo hardware certification.

    Within the Wi-Fi zone, several users can access the Internet from computers, laptops, phones, etc.

    Radiation from Wi-Fi devices during data transmission is an order of magnitude (about 10 times) lower than that of a cell phone.

        ZigBee

ZigBee wireless data transmission technology entered the market after Bluetooth and Wi-Fi. Its emergence is primarily due to the fact that for some applications (for example, remote control of lighting or garage doors, or reading information from sensors), the main criteria for choosing a wireless technology are low power consumption and low cost of the hardware. This implies a low throughput, since in most cases the sensors are powered by a built-in battery whose operating time should reach several months or even years. The Bluetooth and Wi-Fi technologies existing at that time did not meet these criteria, providing high data rates at a high level of power consumption and hardware cost. In 2001, IEEE 802.15 Working Group 4 therefore began work on a new standard that would meet the following requirements:

    very low power consumption of the hardware that implements the technology of wireless data transmission (battery life should be from several months to several years);

    the transfer of information should be carried out at a low speed;

    low hardware cost.

The result was the development of the IEEE 802.15.4 standard. Figure 1.3.3.1 shows the interaction model of the IEEE 802.15.4 standard, ZigBee wireless data transmission technology and the end user.

Figure 1.3.3.1 Interaction model of IEEE 802.15.4 standard, ZigBee wireless data transmission technology and end user

The IEEE 802.15.4 standard defines only the two lowest layers of the interaction model: the physical layer (PHY) and the medium access control (MAC) layer, for three unlicensed frequency bands: 2.4 GHz, 868 MHz and 915 MHz.

The MAC layer is responsible for access to the radio channel using the Carrier Sense Multiple Access with Collision Avoidance (CSMA-CA) method, for managing connection to and disconnection from the network, and for protecting transmitted information with a symmetric key (AES-128).

In turn, ZigBee wireless data transmission technology, proposed by the ZigBee Alliance, defines the remaining levels of the interaction model: the network layer, the security layer, the application framework layer and the application profile layer. The network layer is responsible for device discovery and network configuration and supports three network topology options.

To keep the cost of integrating ZigBee wireless technology into various applications low, the hardware implementation of the IEEE 802.15.4 standard comes in two forms: Reduced-Function Devices (RFDs) and Full-Function Devices (FFDs).

In addition to dividing devices into RFDs and FFDs, the ZigBee Alliance defines three types of logical devices: the ZigBee coordinator, the ZigBee router and the ZigBee end device. The coordinator initializes the network, manages its nodes and stores information about the settings of each node connected to the network. The ZigBee router routes messages sent over the network from one node to another. An end device is any terminal device connected to the network; the RFD and FFD devices discussed above are precisely such end devices. The type of logical device used when building a network is determined by the end user by selecting a specific profile offered by the ZigBee Alliance. When building a network with a mesh ("each-with-each") topology, messages can travel from one node to another along different routes, which makes it possible to build distributed networks (combining several small networks into one large cluster tree), to place nodes at a considerable distance from one another, and to ensure reliable message delivery.

The traffic transmitted over the ZigBee network, as a rule, is divided into periodic, intermittent and repetitive (characterized by a small time interval between sending information messages).

Periodic traffic is typical for applications that need to receive information remotely, such as from wireless sensors or meters. In such applications, obtaining information from sensors or meters is carried out as follows. As mentioned earlier, any terminal device, which in this example is a wireless sensor, should be in sleep mode for the vast majority of its operation time, thereby ensuring very low power consumption. To transmit information, the terminal device wakes up at certain points in time and searches the air for a special signal (beacon) transmitted by the network management device (ZigBee coordinator or ZigBee router) to which the wireless meter is connected. If there is a special signal (beacon) on the air, the terminal device transmits information to the network control device and immediately goes into the “sleep” mode until the next communication session.
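The benefit of this sleep/wake pattern can be estimated with a simple duty-cycle calculation. All current, timing and battery figures below are illustrative assumptions, not values from the ZigBee specification:

```python
def battery_life_hours(capacity_mah, awake_ma, sleep_ma, awake_s, period_s):
    """Average the current over one wake/sleep period, then divide it
    into the battery capacity to get hours of operation."""
    duty = awake_s / period_s                      # fraction of time awake
    avg_ma = duty * awake_ma + (1 - duty) * sleep_ma
    return capacity_mah / avg_ma

# Hypothetical sensor: wakes for 10 ms every second (1% duty cycle),
# draws 20 mA while awake and 1 uA asleep, powered by a 220 mAh coin cell.
hours = battery_life_hours(220, awake_ma=20, sleep_ma=0.001,
                           awake_s=0.010, period_s=1.0)
days = hours / 24
```

With these assumed numbers the cell lasts on the order of a month and a half; shrinking the duty cycle further (waking less often) is what pushes lifetimes toward months and years.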

Intermittent traffic is typical, for example, for remote lighting control devices. Imagine a situation when it is necessary, when a motion sensor installed at the front door is triggered, to send a command to turn on the lighting in the hallway. The transmission of the command in this case is carried out as follows. When the network manager receives a signal that the motion sensor has been triggered, it issues a command to the terminal device (wireless switch) to connect to the ZigBee wireless network. Then a connection is established with the terminal device (wireless switch) and an information message is transmitted containing a command to turn on the lighting. After receiving the command, the connection is disconnected and the wireless switch is disconnected from the ZigBee network. Connecting and disconnecting the terminal device to the ZigBee network only at the moments necessary for this allows you to significantly increase the time that the terminal device stays in the "sleep" mode, thereby ensuring minimal power consumption. The method of using a special signal (beacon) is much more energy intensive.

In some applications, such as security systems, information about sensor triggering must be transmitted almost instantly, without delay. We must also take into account that several sensors can "trigger" at once at a certain moment, generating so-called repetitive traffic in the network; the probability of this event is low, but in security systems it cannot be ignored. When several security sensors (end devices) trigger at once, the ZigBee wireless network transmits the data from each sensor in a specially allocated time slot. In ZigBee technology, this dedicated slot is called a Guaranteed Time Slot (GTS); its availability for transmitting urgent messages allows one to speak of a QoS (quality of service) mechanism in ZigBee. Guaranteed time slots for urgent messages are allocated by the network (PAN) coordinator.

To build a wireless network (for example, a network with a star topology) based on ZigBee technology, a developer needs to purchase at least one network coordinator and the required number of end devices. When planning the network, keep in mind that the maximum number of active end devices connected to the network coordinator should not exceed 240. In addition, software tools for developing, configuring the network and creating custom applications and profiles must be purchased from the ZigBee chip manufacturer.

The high cost of the debug kit, which includes a set of software and hardware for building ZigBee wireless networks of any complexity, is one of the limiting factors for the mass distribution of ZigBee technology in the Russian market.

The brief overview of BlueTooth, Wi-Fi and ZigBee wireless data transmission technologies given in the section shows that each technology has its own distinctive qualities, which consist in achieving the same goal in different ways (with different losses). Comparative characteristics of BlueTooth, Wi-Fi and ZigBee technologies are shown in the table.

Table 1.3.3.1

Comparative characteristics of BlueTooth, Wi-Fi and ZigBee technologies

The table shows that the fastest transmission over the greatest distance is possible with Wi-Fi technology, which is used to send mail, video and other data over the Internet. ZigBee is well suited for low-speed exchange of small volumes of information between a large number of nodes, for remote monitoring and control. Bluetooth has found its greatest application in data exchange between mobile devices.

Network technology is an agreed set of standard protocols and the software and hardware that implements them (for example, network adapters, drivers, cables and connectors) sufficient to build a computer network. The epithet "sufficient" emphasizes the fact that this set is the minimum set of tools with which you can build a workable network.

The protocols on the basis of which a network of a certain technology is built (in the narrow sense) were specially developed for joint work, therefore, the network developer does not require additional efforts to organize their interaction. Sometimes network technologies are called basic technologies, meaning that the basis of any network is built on their basis. Examples of core networking technologies include well-known LAN technologies such as Ethernet, Token Ring, and FDDI, or X.25 wide area networking and frame relay technologies. To get a working network in this case, it is enough to purchase software and hardware related to one basic technology - network adapters with drivers, hubs, switches, cable system, etc. - and connect them in accordance with the requirements of the standard for this technology.

To date, the most common local area network standard is the Ethernet packet data transmission technology. Ethernet standards define wired connections and electrical signals at the physical layer, and the frame format and media access control protocols at the data link layer of the OSI model. Ethernet is mainly described by the IEEE 802.3 group of standards. The transmission medium is coaxial cable, twisted pair or optical cable. Computers are connected to the shared medium according to the typical "common bus" structure. The bus is shared in time, and through it any two computers can exchange data.

All types of Ethernet (including Fast Ethernet and Gigabit Ethernet) use the same media-sharing method: CSMA/CD (Carrier Sense Multiple Access with Collision Detection). The essence of this random access method is as follows. A computer on an Ethernet network can transmit data only if the network is free, that is, if no other computer is currently transmitting; an important part of Ethernet technology is therefore the procedure for determining the availability of the medium. Once a computer has verified that the network is free, it begins transmitting, thereby "capturing" the medium. The time of exclusive use of the shared medium by one node is limited to the transmission time of one frame.

A frame is the unit of data exchanged between computers on an Ethernet network. It has a fixed format and, along with the data field, contains service information such as the recipient's address and the sender's address. The Ethernet network is designed so that when a frame enters the shared medium, all network adapters begin to receive it simultaneously; each parses the destination address, located in one of the initial fields of the frame, and if this address matches its own, the frame is placed in the adapter's internal buffer. Thus, the destination computer receives the data intended for it.

Sometimes two or more computers decide simultaneously that the network is free and start transmitting. This situation, called a collision, prevents correct transmission of data over the network. The Ethernet standard provides an algorithm for detecting and correctly handling collisions; the probability of a collision depends on the intensity of network traffic.
After a collision is detected, NICs that attempted to transmit their frames stop transmitting and, after a random pause, attempt to access the medium again and transmit the frame that caused the collision.
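The random pause follows truncated binary exponential backoff: after the n-th collision a station waits a random number of slot times drawn from [0, 2^min(n,10) - 1], and gives up after 16 failed attempts. A simplified sketch of that rule (not a full 802.3 MAC model):

```python
import random

MAX_BACKOFF_EXPONENT = 10   # 802.3 caps window growth at 2**10 slots
MAX_ATTEMPTS = 16           # after 16 collisions the frame is discarded

def contention_window(attempt: int) -> int:
    """Number of slot choices after the given collision count (1-based)."""
    return 2 ** min(attempt, MAX_BACKOFF_EXPONENT)

def backoff_slots(attempt: int, rng: random.Random) -> int:
    """Wait a random number of slot times in [0, window - 1]."""
    return rng.randrange(contention_window(attempt))

rng = random.Random(42)
delays = [backoff_slots(n, rng) for n in range(1, 11)]
```

Doubling the window on each collision spreads retransmissions out as the network gets busier, which is why collisions resolve quickly under light load and degrade gracefully under heavy load.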

The main advantage of Ethernet networks, which made them so popular, is their cost-effectiveness. To build a network, it is enough to have one network adapter for each computer, plus one physical cable segment of the required length. Other basic technologies, such as Token Ring, require an additional device - a hub - to create even a small network. In addition, fairly simple algorithms for accessing the medium, addressing and transmitting data are implemented in Ethernet networks. The simple logic of the network leads to a simplification and, accordingly, a reduction in the cost of network adapters and their drivers. For the same reason, Ethernet network adapters are highly reliable. And, finally, another remarkable property of Ethernet networks is their good extensibility, that is, the ease of connecting new nodes. Other basic network technologies - Token Ring, FDDI - although they have many individual features, at the same time have many properties in common with Ethernet. Significant differences between one technology and another are related to the peculiarities of the used method of access to a shared environment. Thus, the differences between Ethernet technology and Token Ring technology are largely determined by the specifics of the media separation methods embedded in them - a random access algorithm in Ethernet and an access method by passing a token in Token Ring.

The CAN bus is used to combine all blocks of the Vityaz train safety management system. Let's consider this interface in more detail.

CAN (Controller Area Network) is a serial bus that provides a local network of "intelligent" input/output devices, sensors and actuators of a machine or even an enterprise. It is characterized by a protocol that allows several master devices on the bus, provides real-time data transmission and error correction, and has high noise immunity. The CAN ecosystem consists of a large number of microcircuits supporting devices connected to the bus; they were originally developed by Bosch for use in automobiles and are now widely used in industrial automation. The transfer rate is set by software and can be up to 1 Mbps.

In practice, however, a CAN network usually means a bus-topology network whose physical layer is a differential pair, as defined in the ISO 11898 standard. Transmission is carried out in frames that are received by all network nodes. Specialized chips, CAN bus transceivers (drivers), are produced for access to the bus.

The CAN system works very reliably. Any malfunctions that occur are recorded in the corresponding fault memory and can later be read out with a diagnostic tool.

Figure 1.5.1 CAN system

The network combines several control units, which are connected to it through transceivers. All individual stations of the network are thus in the same position: all control units are equivalent and none of them has priority. This is referred to as a multi-master architecture. Information is exchanged by transmitting serial signals.

The process of information exchange consists of exchanging individual messages, or frames. These messages can be sent and received by each of the control units. Each message carries data about some physical parameter of the system, represented in binary form, i.e. as a sequence of zeros and ones (bits). For example, an engine speed of 1800 rpm, encoded with a resolution of 100 rpm, can be represented as the binary number 00010010 (decimal 18). For signaling, each binary number is converted into a stream of serial pulses (bits). These pulses are fed through the TX (transmit) wire to the input of the transceiver. The transceiver converts the sequences of current pulses into corresponding voltage signals, which are then transmitted serially onto the bus wire. When receiving, the transceiver converts the voltage pulses back into bit sequences and passes them through the RX (receive) wire to the control unit, where the binary sequences are converted back into message data: the binary number 00010010 becomes a speed of 1800 rpm again.
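A minimal sketch of this encode/transmit/decode round trip; the function names, the 8-bit width and the 100 rpm resolution are illustrative assumptions, not part of the CAN specification:

```python
def to_bits(value, width=8):
    """Encode an integer as the list of bits sent over the TX wire (MSB first)."""
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

def from_bits(bits):
    """Decode the bit sequence received on the RX wire back into an integer."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value
```

With a resolution of 100 rpm, 1800 rpm travels as `to_bits(18)` and is recovered on the receiving side as `from_bits(...) * 100`.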

The transmitted message can be received by every control unit. This principle of data transmission is called broadcasting, since it resembles the operation of a broadcast radio station, whose signals are received by every user of the radio network. Broadcasting ensures that all control units connected to the network receive the same information at the same time. Each message carries an identifier that describes the content of the transmitted data, rather than the address of a receiver. Any receiver can respond to one identifier or to several, and several receivers can respond to the same identifier.
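This identifier-based reception can be sketched as an acceptance filter. The concrete filter and mask values below are hypothetical, though mask/filter registers of exactly this kind exist in common CAN controllers:

```python
def accepts(frame_id, filt, mask):
    """CAN-style acceptance filtering: a node receives a frame when the
    identifier bits selected by `mask` match the configured filter."""
    return (frame_id & mask) == (filt & mask)
```

A node with a full 11-bit mask reacts to exactly one identifier, while a node with a coarser mask reacts to a whole group of identifiers; several nodes may accept the same frame, which is what makes the bus a broadcast medium.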

Figure 1.5.2 Principle of CAN messaging

The control unit receives sensor signals, processes them and sends the corresponding control signals to the actuators. Its most essential components are a microcontroller with input and output memories and a memory for storing the software. The sensor signals received by the control unit, for example from a temperature sensor or a crankshaft speed sensor, are polled regularly and stored sequentially in the input memory. The microcontroller processes the input signals according to the programs embedded in it; the resulting signals are written to cells of the output memory, from where they go to the corresponding actuators. To handle messages arriving from and sent to the CAN bus, each control unit is equipped with an additional memory that stores both incoming and outgoing messages.

The CAN system module is used for data exchange via the CAN bus. It is divided into two zones: the receiving zone and the transmitting zone. The CAN system module is connected to the control unit via mailboxes for incoming and outgoing messages. It is usually built into the microcontroller chip of the control unit.

The transceiver is a transmitter/receiver that also acts as an amplifier. It converts the logic-level binary sequences coming from the CAN system module into electrical voltage pulses and vice versa, so that the data can be carried over copper wires. The transceiver communicates with the CAN system module via the TX (transmit) and RX (receive) wires. The RX wire is connected to the CAN bus through an amplifier, which allows the node to "listen" continuously to the digital signals on the bus.

With the bus free, any node can start transmitting at any time. If two or more nodes start transmitting frames simultaneously, bitwise access arbitration takes place: while transmitting the identifier, each node simultaneously monitors the state of the bus. If a node sends a recessive bit but reads back a dominant one, it concludes that another node is transmitting a higher-priority message and postpones its own transmission until the bus is free. Thus, in contrast to Ethernet, for example, collisions in CAN cause no overhead loss of channel bandwidth. The price of this solution is the chance that low-priority messages may never be transmitted.
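A hedged sketch of this wired-AND arbitration (dominant = 0 overrides recessive = 1, so the numerically lowest identifier wins; the function is illustrative, not a controller API):

```python
def arbitrate(ids, width=11):
    """Simulate bitwise CAN arbitration over 11-bit identifiers.

    The bus behaves as a wired AND: a single dominant 0 pulls it low.
    A node that sends recessive 1 but reads dominant 0 drops out."""
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):     # identifiers go out MSB first
        levels = [(i >> bit) & 1 for i in contenders]
        bus = min(levels)                    # wired-AND of all transmitters
        contenders = [i for i, lv in zip(contenders, levels) if lv == bus]
    return contenders[0]                     # surviving, highest-priority id
```

Note that the winner's frame is transmitted undamaged; the losers simply retry later, which is exactly why no bandwidth is lost to collisions.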

All stations connected to the bus receive the message sent by a control unit. The message reaches the receiving areas of the respective CAN system modules via the RX wires, after which each module can check, using the CRC (Cyclic Redundancy Check) sum, whether the message contains transmission errors.
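The CAN frame CRC is a 15-bit checksum with generator polynomial 0x4599 (x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1). A bit-serial sketch of the computation a receiver performs:

```python
def can_crc15(bits):
    """Compute the 15-bit CAN CRC over a sequence of frame bits
    (generator polynomial 0x4599, register initialized to zero)."""
    crc = 0
    for bit in bits:
        crcnxt = bit ^ ((crc >> 14) & 1)   # incoming bit vs. register MSB
        crc = (crc << 1) & 0x7FFF          # shift the 15-bit register
        if crcnxt:
            crc ^= 0x4599                  # reduce by the generator polynomial
    return crc
```

A receiver recomputes this sum over the received bits and compares it with the CRC field of the frame; a mismatch marks a transmission error and the frame is rejected.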

Advantages

    Ability to work in hard real time.

    Ease of implementation and minimal cost of use.

    High resistance to interference.

    Arbitration of network access without loss of bandwidth.

    Reliable control of transmission and reception errors.

    Wide spread of technology, availability of a wide range of products from various suppliers.

    Easier connection of additional equipment.

Flaws

    A small amount of data that can be transferred in one packet (up to 8 bytes).

    Large size of service data in the packet (in relation to payload data).

    The absence of a single, generally accepted high-level protocol standard. This, however, can also be seen as an advantage: the network standard provides ample opportunity for virtually error-free data transfer between nodes and leaves the developer free to build into it everything that will fit.
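The overhead complaint can be quantified. A classic data frame with an 11-bit identifier carries about 47 bits of fixed fields (SOF, arbitration, control, CRC, ACK, EOF and the 3-bit interframe space, ignoring bit stuffing) around at most 64 data bits; a hedged sketch of the resulting payload share:

```python
def frame_efficiency(payload_bytes, overhead_bits=47):
    """Fraction of payload bits in a classic CAN data frame (11-bit ID).

    The 47 overhead bits cover SOF, arbitration, control, CRC, ACK,
    EOF and interframe space; bit stuffing is ignored for simplicity."""
    data_bits = 8 * payload_bytes
    return data_bits / (data_bits + overhead_bits)
```

Even at the maximum payload of 8 bytes, only about 58% of the bits on the wire are payload (64 / 111), and the share drops further for shorter messages.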

      USB interface

In the fourth chapter of this graduation project, the RFP for the RPDP test stand will be written. The stand connects to the CAN bus via USB, so it was decided to study the USB interface as well.

USB (Universal Serial Bus) is an industry standard for extending the architecture of a PC computer.

The USB architecture is defined by the following criteria:

Easy-to-implement PC peripheral extension;

Transfer rate up to 12 Mbps (version 1.1), up to 480 Mbps (version 2.0), up to 5 Gbps (version 3.0);

Ability to integrate into PC computers of any size and configuration;

Easy creation of devices-extensions of PC computers.

From the user's point of view, the important parameters of USB are the following:

Ease of connection to a PC computer, i.e. it is impossible to connect the device incorrectly;

It is not required to turn off the power before connecting due to the design of the connectors;

Hiding electrical connection details from the end user;

Self-identifying peripherals (Plug & Play);

Possibility of dynamic connection of peripheral devices;

Low power devices (up to 500mA) can be powered directly from the USB bus.

Devices are physically connected in a multi-tiered star topology. The center of each star is a hub, which provides additional connection points. Each cable segment connects two points: a hub with another hub or with a function (an end peripheral device). The system has one, and only one, host controller, located at the top of the pyramid of functions and hubs and managing the operation of the entire system. The host controller integrates with the root hub (Root Hub), which provides one or more connection points, called ports. The USB controller included in chipsets usually has a built-in two-port root hub.

Logically, a device connected to any port on the USB hub can be considered as being directly connected to the host controller. Thus, the connection point of the device is not important.

The host controller distributes the bus bandwidth between devices. The USB bus allows you to connect, configure, use and disconnect devices while the host and the devices themselves are running.

Functions are devices capable of transmitting or receiving data or control information over the bus. Typically, functions are separate peripherals connected to a hub port with a USB cable. Each function provides configuration information that describes the device's capabilities and resource requirements. Before use, a function must be configured by the host: it must be allocated channel bandwidth and have its configuration options selected.

A hub is a wiring concentrator; its connection points are called ports. Each hub converts one connection point into many, and the architecture allows several hubs to be chained. Each hub has one upstream port (Upstream Port) for connecting to the upper-level hub and one or more downstream ports (Downstream Port) for connecting functions or lower-level hubs. The hub recognizes the connection and disconnection of devices and controls the power supplied to the downstream segments.

To save the programmer from the routine work of writing a driver, some operating systems deliberately include low-level drivers. The Windows system includes:

    the host controller driver (USB Host Controller Driver) services the controller hardware;

    the bus driver (USB Bus Driver) is responsible for managing transactions, power and device recognition;

    class driver.

From the programmer's point of view, the class driver and the interface for calling this driver are of most interest. Here the operating system takes a step towards the unification of interfaces. All USB devices are divided into groups (hubs, HID devices, audio, storage devices, printers, communication devices) according to common properties, functions performed and resource requirements. For each device group, Windows provides a separate driver that is automatically installed when a device is found to belong to one of the groups. Thus, in most cases, no drivers are required.

The USB HID (Human Interface Device) class is a class of USB devices for human interaction; it includes devices such as keyboards, mice and game controllers. This was one of the first USB classes supported by the Windows operating system. A HID device can not only enter data into a computer but also receive data from it. To send data to a HID device, you initiate a connection with the device and then work with it as with a regular file.
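As a sketch of the "work with it as with a regular file" idea: on Linux, a connected HID device appears as a device node that can be opened like any other file. The path /dev/hidraw0 and the 8-byte report size below are assumptions (a boot-protocol keyboard, for instance, sends 8-byte input reports):

```python
def read_hid_report(path="/dev/hidraw0", size=8):
    """Read one input report from a HID device node, treating it as a file.

    The path /dev/hidraw0 and the 8-byte report size are illustrative
    assumptions; real devices and report lengths vary."""
    with open(path, "rb") as dev:
        return dev.read(size)
```

Writing an output report to the device works the same way, with the file opened for writing.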

This chapter has surveyed the main data transmission technologies. To implement information exchange between a computer and a train, it was decided to study the existing wireless data transmission technologies in order to subsequently select the most appropriate one (Chapter 2). In addition to physical-layer wireless technologies, link-layer technologies (Ethernet, Frame Relay, ATM) were considered.

This section also reviewed the main object identification technologies. Among them, special attention was paid to RFID and optical identification, which can be used to initiate the connection between a fixed control room and a train traffic recorder (TRDR).

Most residents of modern cities transmit or receive data every day: computer files, a television picture, a radio broadcast, anything that represents some portion of useful information. There are a huge number of technological methods of data transmission, and in many segments of the information market the corresponding channels are being modernized at an incredibly dynamic pace. Familiar technologies that would seem to satisfy human needs perfectly well are being replaced by new, more advanced ones. Not long ago, Web access through a cellular telephone was considered almost exotic, but today this option is familiar to most people. Modern file transfer speeds over the Internet, measured in hundreds of megabits per second, would have seemed fantastic to the first users of the World Wide Web. Through what types of infrastructure can data be transferred? What might determine the choice of one channel or another?

Basic Data Transfer Mechanisms

The concept of data transmission can be associated with various technological phenomena, but in general it belongs to the sphere of computer communications. In this sense, data transfer is the exchange (sending and receiving) of files, folders and other products of machine code.

The term under consideration can also correlate with the non-digital sphere of communications. For example, the transmission of a TV signal, radio, the operation of telephone lines - if we are not talking about modern high-tech tools - can be carried out using analog principles. In this case, data transmission is the transmission of electromagnetic signals through one channel or another.

Mobile communication occupies an intermediate position between the two technological implementations of data transmission, digital and analog. Some of the relevant communication technologies belong to the first type, for example GSM communication and 3G or 4G Internet; others are less computerized and can therefore be considered analog, for example voice communication in the AMPS or NTT standards.

However, the modern trend in communication technologies is that data transmission channels, whatever type of information they carry, are being actively "digitized". In large Russian cities it is difficult to find telephone lines that still operate by analog standards. Technologies like AMPS are gradually losing relevance and being replaced by more advanced ones; TV and radio are becoming digital. Thus, we may consider modern data transmission technologies mainly in a digital context, although the historical aspect of how certain solutions came into use is certainly worth exploring.

Modern data transmission systems can be classified into three main groups: those implemented in computer networks, those used in mobile networks, and those underlying TV and radio broadcasting. Let's consider their specifics in more detail.

Data transmission technologies in computer networks

The main subject of data transfer in computer networks, as noted above, is a collection of files, folders and other products of machine code (for example, arrays, stacks, etc.). Modern digital communications can operate on the basis of a variety of standards, among which the most common is TCP/IP. Its main principle is to assign each computer a unique IP address, which serves as the main reference point for data transfer.

File exchange in modern digital networks can be carried out using wired technologies or those that do not involve the use of a cable. The classification of the corresponding infrastructures of the first type can be carried out on the basis of a specific type of wire. In modern computer networks, the most commonly used are:

twisted pairs;

fiber optic wires;

coaxial cables;

USB cables;

telephone wires.

Each of the noted types of cables has both advantages and disadvantages. For example, twisted pair is a cheap, versatile and easy-to-install type of wire, but it is significantly inferior to fiber in terms of bandwidth (we will consider this parameter in more detail a little later). USB cables are the least suitable for data transfer within computer networks, but they are compatible with almost any modern computer - it is extremely rare to find a PC that is not equipped with USB ports. Coaxial cables are sufficiently protected from interference and allow data transmission over very long distances.

Characteristics of computer data networks

It will be useful to study some key characteristics of the computer networks in which files are exchanged. Among the most important parameters of such an infrastructure is throughput: this characteristic bounds the maximum speed and volume of data transmitted in the network, and both of these parameters are themselves key. The data transfer rate measures how much data can be moved from one computer to another in a given amount of time. It is most often expressed in bits per second (in practice, as a rule, in kilo-, mega- or gigabits, and in powerful networks in terabits).
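As a worked illustration (the numbers below are hypothetical), the ideal transfer time follows directly from the bit rate:

```python
def transfer_time_seconds(size_bytes, rate_bits_per_s):
    """Ideal time to move a file over a link of the given bit rate;
    protocol overhead and latency are deliberately ignored."""
    return size_bytes * 8 / rate_bits_per_s
```

For example, a 700 MB file over a 100 Mbit/s channel takes at least 700e6 * 8 / 100e6 = 56 seconds; real transfers are slower because of protocol overhead and shared bandwidth.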

Classification of computer data transmission channels

Data exchange over a computer infrastructure can be carried out through three main types of channels: duplex, simplex and half-duplex. A duplex channel allows a device to transmit and receive data at the same time. A simplex channel carries data in one direction only, so a device on it can only transmit or only receive. Half-duplex devices transmit and receive in turn.

Wireless data transmission in computer networks is carried out most often through standards:

- "small radius" (Bluetooth, infrared ports);

- "medium radius" - Wi-Fi;

- "long range" - 3G, 4G, WiMAX.

The speed at which files are transferred can vary greatly depending on the particular communication standard, as well as on the stability of the connection and its immunity to interference. Wi-Fi is considered one of the best solutions for organizing home and intra-corporate computer networks. When data must travel long distances, 3G, 4G, WiMAX or competing technologies are used. Bluetooth remains in demand, and to a lesser extent infrared ports, since activating them requires practically no fine-tuning of the devices that exchange files.

The most popular "short range" standards are in the mobile device industry. So, data transfer to android from another similar OS or compatible is often carried out just the same with via Bluetooth. However, mobile devices can quite successfully integrate with computer networks, for example, using Wi-Fi.

A computer data transmission network functions through the use of two resources: hardware and the necessary software. Both are needed to organize a full-fledged file exchange between PCs. Data transfer programs can be used in a variety of ways and can be loosely classified by their scope of application.

There is user software adapted to working with web resources; such solutions include browsers. There are programs used as a tool for voice communication, supplemented by the ability to organize video chats, for example Skype.

There is software that belongs to the system category. The user may hardly interact with such solutions directly, yet their operation may be necessary to ensure the exchange of files. As a rule, such software works as background programs within the operating system. These types of software connect the PC to the network infrastructure; on top of such connections, user tools such as browsers and video chat programs can then operate. System solutions are also important for keeping network connections between computers stable.

There is also software designed to diagnose connections. If some data transfer error prevents a reliable connection between PCs, it can be pinpointed with a suitable diagnostic program. The use of various types of software is one of the key distinctions between digital and analog technologies: in a traditional data infrastructure, software solutions tend to have incomparably less functionality than in networks built on digital concepts.

Data transmission technologies in cellular networks

Let us now study how data can be transferred in another large-scale infrastructure: cellular networks. When considering this technological segment, it is useful to pay attention to the history of the relevant solutions, because the standards by which data is transmitted in cellular networks develop very dynamically. Some of the solutions discussed above for computer networks have remained relevant for decades; this is especially evident with wired technologies: coaxial cable, twisted pair and fiber optic wires were introduced into computer communications long ago, yet their potential is far from exhausted. In the mobile industry, by contrast, new concepts appear almost every year and are put into practice with varying degrees of intensity.

So, the evolution of cellular technologies begins with the introduction in the early 80s of the earliest standards - such as NMT. It can be noted that its capabilities were not limited to providing voice communications. Data transfer via NMT networks was also possible, but at a very low speed - about 1.2 Kbps.

The next step in technological evolution in the cellular communications market was associated with the introduction of the GSM standard. The data transfer rate when using it was assumed to be much higher than in the case of using NMT - about 9.6 Kbps. Subsequently, the GSM standard was supplemented by HSCSD technology, the activation of which allowed cellular subscribers to transmit data at a speed of 57.6 Kbps.

Later, the GPRS standard appeared, which made it possible to separate typically "computer" traffic carried in cellular channels from voice traffic. The data transfer rate with GPRS could reach about 171.2 Kbps. The next technological solution implemented by mobile operators was the EDGE standard, which provided data transmission at speeds up to 326 Kbps.

The development of the Internet required developers of cellular communication technologies to introduce solutions that could become competitive with wired standards - primarily in terms of data transfer speed, as well as connection stability. A significant step forward was the introduction of the UMTS standard to the market. This technology made it possible to provide data exchange between subscribers of a mobile operator at a speed of up to 2 Mbps.

Later, the HSDPA standard appeared, in which the transmission and reception of files could be carried out at speeds up to 14.4 Mbps. Many digital industry experts believe that since the introduction of HSDPA technology, cellular operators have begun to compete directly with cable ISPs.

In the late 2000s, the LTE standard and its competitive counterparts appeared, through which subscribers of cellular operators became able to exchange files at speeds of several hundred megabits per second. It can be noted that such resources are not always available even to users of modern wired channels. Most Russian providers offer their subscribers data channels at speeds not exceeding 100 Mbit/s, and in practice most often several times less.

Generations of cellular technology

The NMT standard is generally assigned to the 1G generation. GPRS and EDGE technologies are often classified as 2G, HSDPA as 3G, and LTE as 4G. Each of these solutions has competitive analogues: for example, some experts consider WiMAX a competitor to LTE, and other 4G-market rivals of LTE are 1xEV-DO and IEEE 802.20. There is a point of view that it is not quite correct to classify LTE as 4G, since its top speed falls slightly short of the 1 Gbit/s defined for the 4G concept. It is therefore possible that in the near future a new standard will appear on the global cellular market, perhaps even more advanced than 4G and capable of such impressive transfer speeds. In the meantime, LTE is among the solutions being deployed most dynamically: leading Russian operators are actively modernizing the corresponding infrastructure across the country, since high-quality 4G data transmission is becoming one of the key competitive advantages in the cellular communications market.

TV broadcast technologies

Digital data transmission concepts can also be used in the media industry. For a long time, information technology was not introduced very actively into the organization of television and radio broadcasts, mainly because the corresponding improvements offered limited profitability. Solutions combining digital and analog technologies were often used: the infrastructure of a television center might be fully "computerized", while analog programs were still broadcast to the subscribers of the television networks.

As the Internet spread and computer transmission channels became cheaper, players in the television and radio industry began to actively "digitize" their infrastructure and integrate it with IT solutions. Different countries have approved their own standards for digital television broadcasting; the most common are DVB, adapted for the European market, ATSC, used in the USA, and ISDB, used in Japan.

Digital solutions in the radio industry

Information technology is also actively involved in the radio industry. It can be noted that such solutions are characterized by certain advantages in comparison with analog standards. Thus, in digital radio broadcasts, a significantly higher sound quality can be achieved than when using FM channels. The digital data network theoretically gives radio stations the ability to send not only voice traffic to subscriber radios, but also any other media content - pictures, videos, texts. Appropriate solutions can be implemented in the infrastructure for organizing digital television broadcasts.

Satellite data channels

Satellite channels, through which data can also be transmitted, deserve a category of their own. Formally we are entitled to classify them as wireless, but the scale of their use is such that it would not be quite correct to lump them together with Wi-Fi and Bluetooth. Satellite data channels can be used, and in practice are used, in building almost any type of communication infrastructure listed above.

By means of "plates" it is possible to organize the connection of PCs in a network, connect them to the Internet, ensure the functioning of television and radio broadcasts, and increase the level of technological effectiveness of mobile services. The main advantage of satellite channels is inclusiveness. Data transmission can be carried out when they are involved in almost any place on the planet - as well as reception - from anywhere in the world. Satellite solutions also have some technological disadvantages. For example, when transferring computer files using a "dish", there may be a noticeable delay in response, or "ping" - the time interval between the moment a file is sent from one PC and received on another.

Almost every modern company needs to improve the efficiency of its networks and computer systems. One of the necessary conditions for this is the seamless transfer of information between servers, data stores, applications and users. It is the way data is transferred within information systems that often becomes a performance "bottleneck", nullifying all the advantages of modern servers and storage systems. Developers and system administrators try to eliminate the most obvious bottlenecks, even knowing that once a bottleneck in one part of the system is removed, it reappears in another.

For many years, bottlenecks have occurred predominantly in servers, but as servers have evolved functionally and technologically, they have moved to networks and networked storage systems. Recently, very large storage arrays have been created, which brings bottlenecks back to the network. Data growth and centralization, as well as the bandwidth demands of next-generation applications, often consume all available bandwidth.

When an information service manager faces the task of creating a new information processing system or expanding an existing one, one of the most important questions is the choice of data transmission technology. This involves choosing not only the network technology but also the protocol for connecting various peripheral devices. The most popular solutions widely used to build SANs (Storage Area Networks) are Fibre Channel, Ethernet and InfiniBand.

Ethernet technology

Today, Ethernet technology leads the high-performance LAN sector. Enterprises all over the world invest in Ethernet cabling, equipment and staff training. The widespread use of this technology keeps market prices low, and the cost of implementing each new generation of networks tends to decrease. The constant growth of traffic in modern networks forces operators, administrators and corporate network architects to look to faster network technologies to solve the problem of bandwidth shortage. The addition of 10-Gigabit Ethernet to the Ethernet family makes it possible to support new resource-intensive applications on LANs.

Appearing more than a quarter of a century ago, Ethernet technology soon became dominant in building local area networks. Thanks to its ease of installation and maintenance, reliability and low cost of implementation, its popularity has grown so much that today we can safely say that almost all traffic on the Internet begins and ends in Ethernet networks. The IEEE 802.3ae 10-Gigabit Ethernet standard, approved in June 2002, marked a turning point in the development of this technology: with its advent, the area of use of Ethernet expands to the scale of metropolitan (MAN) and wide area (WAN) networks.

There are a number of market factors that, according to industry analysts, are pushing 10-Gigabit Ethernet technology to the forefront. In the development of network technologies it has become traditional for an alliance of developer companies to appear whose main task is to promote the new networks, and 10-Gigabit Ethernet was no exception. At the origin of this technology was the 10 Gigabit Ethernet Alliance (10GEA), which included such industry giants as 3Com, Cisco, Nortel, Intel, Sun and many other companies (more than a hundred in total). Whereas in the earlier Fast Ethernet and Gigabit Ethernet efforts the developers borrowed certain elements of other technologies, the specifications of the new standard were created almost from scratch. In addition, the 10-Gigabit Ethernet project was aimed at large transport and backbone networks, for example city-wide ones, whereas even Gigabit Ethernet had been developed exclusively for use in local networks.

The 10-Gigabit Ethernet standard provides for transmitting an information flow at speeds up to 10 Gbit/s over single-mode and multi-mode optical cable; depending on the transmission medium, the distance can range from 65 m to 40 km. The new standard was required to meet the following basic technical requirements:

  • bidirectional data exchange in full-duplex mode in point-to-point topologies;
  • support for a 10 Gb/s data rate at the MAC layer;
  • a LAN PHY physical layer specification for connecting to local networks operating at the MAC layer at 10 Gb/s;
  • a WAN PHY physical layer specification for connecting to SONET/SDH networks, operating at the MAC layer at a data rate compatible with the OC-192 standard;
  • a mechanism for adapting the MAC layer data rate to the WAN PHY data rate;
  • support for two types of fiber optic cable: single-mode (SMF) and multi-mode (MMF);
  • a media-independent interface specification, XGMII*;
  • backward compatibility with previous versions of Ethernet (preservation of packet format, size, etc.).

* XG here stands for 10 Gigabit and MII stands for Media Independent Interface.

Recall that the 10/100 Ethernet standards define two modes: half-duplex and full-duplex. Half-duplex in its classic form uses a shared transmission medium and the CSMA/CD (Carrier-Sense Multiple Access / Collision Detection) protocol. The main disadvantages of this mode are loss of efficiency as the number of simultaneously operating stations grows, and the distance restrictions tied to the minimum packet length (64 bytes). To preserve the 64-byte minimum packet at higher speed, Gigabit Ethernet uses a carrier extension technique that pads the transmission slot to 512 bytes. Since the 10-Gigabit Ethernet standard is designed for point-to-point backbone connections, half-duplex mode is not included in its specification at all. The channel length is therefore limited only by the characteristics of the physical medium, the transceiver devices used, the signal power and the modulation methods; the necessary topology can be provided, for example, by switches. Full-duplex transmission also makes it possible to keep the minimum packet size of 64 bytes without carrier extension techniques.
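The link between minimum frame length and cable length mentioned above can be made concrete. A minimal sketch, assuming a typical signal propagation speed of about 2e8 m/s and ignoring repeater and interframe delays:

```python
# Sketch: why CSMA/CD ties the minimum frame length to network diameter.
# To detect a collision, a station must still be transmitting when the
# collision signal returns, so frame_time >= 2 * propagation_delay.
# The propagation speed below (~2e8 m/s) is an assumption.

def max_collision_domain(min_frame_bytes, rate_bps, signal_speed=2e8):
    """Upper bound on one-way cable length (metres) for reliable
    collision detection, ignoring repeater and interframe delays."""
    frame_time = min_frame_bytes * 8 / rate_bps   # seconds on the wire
    return frame_time / 2 * signal_speed          # round trip -> one way

print(f"10 Mb/s,  64-byte frame: ~{max_collision_domain(64, 10e6):.0f} m")   # ~5120 m
print(f"100 Mb/s, 64-byte frame: ~{max_collision_domain(64, 100e6):.0f} m")  # ~512 m
# At 1 Gb/s a 64-byte frame would allow only ~51 m, which is why Gigabit
# Ethernet's half-duplex mode extends the slot to 512 bytes:
print(f"1 Gb/s,  512-byte slot:  ~{max_collision_domain(512, 1e9):.0f} m")   # ~410 m
```

This simple bound also shows why 10-Gigabit Ethernet, having dropped half-duplex entirely, is freed from any frame-length distance constraint.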

In accordance with the Open Systems Interconnection (OSI) reference model, a network technology is defined by the two lower layers: physical (Layer 1) and data link (Layer 2). In this scheme, the Ethernet physical device (PHY) layer corresponds to Layer 1 and the media access control (MAC) layer to Layer 2. Each of these layers, depending on the implemented technology, may in turn contain several sublayers.

The MAC (Media Access Control) layer provides a logical connection between the MAC clients of peer stations. Its main functions are initializing, managing and maintaining the connection with a network peer. Naturally, the nominal data transfer rate from the MAC layer to the PHY layer in 10-Gigabit Ethernet is 10 Gb/s. However, the WAN PHY must deliver data at a slightly lower rate to accommodate SONET OC-192 networks. This is achieved by a mechanism that dynamically adapts the interframe gap, increasing it by a predetermined amount.

The Reconciliation Sublayer (Fig. 1) is the interface between the serial data stream of the MAC layer and the parallel stream of the XGMII sublayer; it maps MAC layer data octets onto the parallel XGMII paths. XGMII is the media-independent 10-Gigabit interface. Its main function is to provide a simple, easily implemented interface between the data link and physical layers: it isolates the link layer from the specifics of the physical one and thus allows the former to work at a single logical level with different implementations of the latter. XGMII consists of two independent channels, transmit and receive, each carrying 32 bits of data over four 8-bit paths.

Fig. 1. 10-Gigabit Ethernet layers.

The next part of the protocol stack concerns the 10-Gigabit Ethernet physical layer. The Ethernet architecture divides the physical layer into three sublayers. The Physical Coding Sublayer (PCS) encodes and decodes the data stream passing to and from the data link layer. The PMA (Physical Medium Attachment) sublayer is a parallel-to-serial (and serial-to-parallel) converter: it turns code groups into a bit stream for serial bit-oriented transmission and back, and also provides transmit/receive synchronization. The PMD (Physical Medium Dependent) sublayer is responsible for signaling in a given physical medium; typical functions of this sublayer are signal shaping, amplification and modulation. Different PMD devices support different physical media. In turn, the Media Dependent Interface (MDI) defines connector types for the various physical media and PMD devices.

10-Gigabit Ethernet technology provides a low cost of ownership compared to alternatives, including both acquisition and support costs, as customers' existing Ethernet networking infrastructure seamlessly interoperates with it. In addition, 10 Gigabit Ethernet appeals to administrators with a familiar management organization and the ability to apply lessons learned, as it leverages the processes, protocols, and controls already deployed in the existing infrastructure. It is worth recalling that this standard provides flexibility in designing connections between servers, switches, and routers. Thus, Ethernet technology offers three main advantages:

  • ease of use,
  • high throughput,
  • low cost.

In addition, it is simpler than some other technologies, because it allows networks located in different places to be connected as parts of a single network. Ethernet bandwidth is scalable in steps from 1 to 10 Gb/s, which makes better use of network capacity. Finally, Ethernet equipment is generally more cost effective than traditional telecommunications equipment.

To illustrate the possibilities of the technology, consider one example. Using a 10-Gigabit Ethernet network, a team of scientists working on the Japanese Data Reservoir project (http://data-reservoir.adm.su-tokyo.ac.jp) transmitted data from Tokyo to CERN, the particle physics research center in Geneva. The data link crossed 17 time zones and spanned 11,495 miles (18,495 km). A 10-Gigabit Ethernet link connected computers in Tokyo and Geneva as parts of the same local area network. The network used optical equipment and Ethernet switches from Cisco Systems, Foundry Networks and Nortel Networks.

In recent years, Ethernet has also become widely used by telecom operators to connect objects within the city. But the Ethernet network can stretch even further, spanning entire continents.

Fiber Channel

Fiber Channel technology makes it possible to fundamentally change the computer network architecture of any large organization. It is well suited for implementing a centralized SAN (storage area network), in which disk and tape drives reside in their own separate network, possibly geographically quite remote from the main corporate servers. Fiber Channel is a serial communications standard designed for high-speed communication between servers, drives, workstations, hubs and switches. Note that this interface is almost universal; it is used for far more than connecting individual drives and storage systems.

When the first networks emerged to bring computers together for joint work, it proved convenient and effective to move resources closer to workgroups. Thus, in an attempt to minimize network load, storage media were distributed among multiple servers and desktops. As a result, there are two data transmission channels in the network at once: the network itself, over which clients and servers exchange data, and the channel over which data moves between a computer's system bus and its storage device. The latter can be a link between a controller and a hard drive, or between a RAID controller and an external disk array.

This separation of channels is largely due to different requirements for data transfer. In the network, the priority is delivering information to one client out of many possible ones, which requires quite complex addressing mechanisms; moreover, the network channel spans significant distances, so serial transmission is preferred. The storage channel, by contrast, performs an extremely simple task: exchanging data with a storage device known in advance. The only thing required of it is to do so as quickly as possible, and the distances involved are usually small.

However, today's networks face the challenge of processing ever more data. High-speed multimedia applications and image processing require far more I/O than ever before. Organizations are forced to keep ever-larger amounts of data online, which requires more external storage capacity, and the need for backup copies of huge amounts of data requires placing secondary storage devices at ever greater distances from the processing servers. In some cases, pooling server and storage resources for a data center using Fiber Channel turns out to be much more efficient than the standard combination of Ethernet plus a SCSI interface.

ANSI registered a working group to develop a method for high-speed data exchange between supercomputers, workstations, PCs, drives and display devices back in 1988. In 1992, three of the largest computer companies, IBM (http://www.ibm.com), Sun Microsystems (http://www.sun.com) and HP (http://www.hp.com), created the FCSI (Fibre Channel Systems Initiative) group, which was tasked with developing a method for the rapid transmission of digital data. The group developed a number of preliminary specifications, called profiles. Since fiber-optic cables were expected to become the physical medium for the exchange of information, the word "fiber" appeared in the name of the technology; a few years later, however, the possibility of using copper wires was added to the relevant recommendations. The ISO (International Organization for Standardization) committee then proposed replacing the English spelling "fiber" with the French "fibre" to weaken the association with fiber-optic media alone, while retaining almost the original spelling. When the preliminary work on profiles was completed, further support and development of the new technology was taken over by the Fibre Channel Association (FCA), which became an organizational member of the ANSI committee. In addition to the FCA, an independent working group, the FCLC (Fibre Channel Loop Community), began to promote one variant of the technology, FC-AL (Fibre Channel Arbitrated Loop). Currently, the FCIA (Fibre Channel Industry Association, http://www.fibrechannel.org) has assumed all coordination work to promote the Fiber Channel technology. In 1994, the FC-PH (physical connection and data transfer protocol) standard was approved by ANSI committee T11 and received the designation X3.230-1994.

Fiber Channel technology has a number of advantages that make this standard convenient when organizing data exchange in groups of computers, as well as when used as an interface for mass storage devices, in local area networks and when choosing means of accessing global networks. One of the main advantages of this technology is its high data transfer rate.

FC-AL is just one of three possible Fiber Channel topologies, and the one particularly used for storage systems. In addition to it, a point-to-point topology and a star topology based on switches and hubs are possible. A network built on switches connecting many nodes (Fig. 2) is called a fabric in Fiber Channel terminology.

Fig. 2. A Fiber Channel fabric.

Up to 126 hot-swappable devices can be included in an FC-AL loop. With coaxial cable, the distance between them can reach 30 m; with fiber optic cable, it increases to 10 km. The technology is based on simply moving data from the transmitter buffer to the receiver buffer with full control over this, and only this, operation. FC-AL is entirely indifferent to how data is processed by individual protocols before and after buffering, so the type of data transmitted (commands, packets or frames) plays no role.

The Fiber Channel architectural model describes in detail the connection parameters and exchange protocols between individual nodes. The model can be represented as five functional layers defining the physical interface, the transmission protocol, the signaling protocol, general procedures, and the mapping protocol. Numbering runs from the lowest hardware layer, FC-0, responsible for the parameters of the physical connection, to the top software layer, FC-4, which interacts with higher-level applications. The mapping protocol provides communication with I/O interfaces (SCSI, IPI, HIPPI, ESCON) and network protocols (802.2, IP), and all supported protocols can be used simultaneously. For example, an FC-AL interface carrying both IP and SCSI is suitable for both system-to-system and system-to-peripheral exchanges. This eliminates the need for additional I/O controllers and greatly reduces cabling complexity and, of course, overall cost.

Since Fiber Channel is a low-level protocol that does not contain I/O commands, communication with external devices and computers is provided by higher-level protocols, such as SCSI and IP, for which FC-PH serves as a transport. Network and I/O protocols (such as SCSI commands) are converted to FC-PH protocol frames and delivered to the destination. Any device (computer, server, printer, drive) that can communicate using Fiber Channel technology is called a Node port, or simply a node. Thus, the main purpose of Fiber Channel is the ability to manipulate high-level protocols using various transmission media and already existing cable systems.

The high reliability of exchange when using Fiber Channel is due to the dual-port architecture of disk devices, cyclic control of transmitted information and hot-swappable devices. The protocol supports almost any cabling system in use today. However, two media are most widely used - optics and twisted pair. Optical links are used to connect between devices on a Fiber Channel network, while twisted pair is used to connect individual components in a device (for example, drives in a disk subsystem).

The standard provides multiple bandwidths, with exchange rates of 1, 2 or 4 Gb/s. Since two optical fibers are used to connect devices, each carrying traffic in one direction, with a balanced mix of read and write operations the data exchange rate doubles; in other words, Fiber Channel operates in full-duplex mode. In terms of megabytes, the nominal speed of Fiber Channel is 100, 200 or 400 MB/s, respectively. In reality, with a 50% read/write mix, the aggregate interface speed reaches 200, 400 and 800 MB/s. Fiber Channel 2 Gb/s solutions are currently the most popular because they offer the best value for money.
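The relationship between the Gb/s line rates and the nominal MB/s figures quoted above can be sketched. This assumes the standard FC line rates (1.0625, 2.125 and 4.25 Gbaud) and the fact that Fiber Channel uses 8b/10b encoding, so 10 line bits carry one data byte:

```python
# How Fiber Channel's nominal MB/s figures follow from the line rate.
# FC uses 8b/10b encoding (10 line bits per data byte), so a 1.0625 Gbaud
# link yields about 106 MB/s of payload per direction, quoted as 100 MB/s.
# Full duplex doubles the aggregate with a balanced read/write mix.

def fc_throughput(gbaud, duplex=True):
    """Approximate payload rate in MB/s for an 8b/10b-encoded link."""
    mb_per_s = gbaud * 1e9 / 10 / 1e6   # 10 line bits per payload byte
    return mb_per_s * (2 if duplex else 1)

for gbaud, name in [(1.0625, "1 Gb/s FC"), (2.125, "2 Gb/s FC"), (4.25, "4 Gb/s FC")]:
    print(f"{name}: ~{fc_throughput(gbaud, duplex=False):.0f} MB/s per direction, "
          f"~{fc_throughput(gbaud):.0f} MB/s full duplex")
```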

Note that the equipment for Fiber Channel can be roughly divided into four main categories: adapters, hubs, switches and routers, and the latter have not yet received wide distribution.

Fiber Channel solutions are typically intended for organizations that need to keep large amounts of information online, speed up primary and secondary external storage operations in data-intensive networks, or place external storage farther from the servers than the SCSI standard allows. Typical applications are databases and data banks, big data analysis and decision support systems, multimedia storage and processing systems for television and film studios, and systems where, for security reasons, disks must be located at a considerable distance from the servers.

Fiber Channel makes it possible to separate all data flows between enterprise servers, archiving traffic and so on from the user's local network. The configuration possibilities are then enormous: any server can access any disk resource permitted by the system administrator, several devices can access the same disk simultaneously, and all at very high speed. Data archiving also becomes an easy and transparent task. A cluster can be created at any time by allocating resources for it on any of the Fiber Channel storage systems. Scaling is equally straightforward: depending on what is lacking, you can add either a server (purchased purely for its computing capabilities) or a new storage system.

One very important and necessary feature of Fiber Channel is the ability to segment the system, also called zoning. Zoning is similar to virtual LANs in a local network: devices located in different zones cannot "see" each other. Zoning can be done either in the switched fabric or based on WWN (World Wide Name) addresses. A WWN address is analogous to a MAC address in Ethernet networks: each FC controller has its own unique WWN, assigned by the manufacturer, and any well-designed storage system lets you list the addresses of the controllers or fabric ports that are allowed to work with a given device. Zoning is primarily intended to improve the security and performance of SANs: unlike an ordinary network, a device cannot be reached from outside the zone that is closed to it.
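The zone-membership rule above can be sketched in a few lines. The zone names and WWN values below are invented for illustration; real zoning is configured in the fabric switches, not in application code:

```python
# Minimal sketch of WWN-based zoning: two devices can communicate only
# if they share at least one zone. All names and WWNs are hypothetical.

zones = {
    "zone_db":     {"10:00:00:00:c9:2a:11:01", "21:00:00:20:37:f0:aa:01"},
    "zone_backup": {"10:00:00:00:c9:2a:11:02", "21:00:00:20:37:f0:aa:02"},
}

def can_see(wwn_a: str, wwn_b: str) -> bool:
    """True if both WWNs are members of at least one common zone."""
    return any(wwn_a in members and wwn_b in members
               for members in zones.values())

print(can_see("10:00:00:00:c9:2a:11:01", "21:00:00:20:37:f0:aa:01"))  # True
print(can_see("10:00:00:00:c9:2a:11:01", "21:00:00:20:37:f0:aa:02"))  # False
```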

FICON Technology

FICON (FIber CONnection) technology provides increased performance, enhanced functionality and communication over long distances. As a data transfer protocol, it is based on the ANSI Fibre Channel standard (FC-SB-2). IBM's first general-purpose standard for communication between mainframes and external devices (such as disks, printers and tape drives) was based on parallel connections, not too different from the multi-core cables and multi-pin connectors used in those years to connect desktop printers to PCs. Many parallel wires carried more data "at a time" (in parallel); in the mainframe world this was called bus-and-tag.

Huge physical connectors and cabling were the only option until ESCON technology hit the market in the 1990s. It was a fundamentally different technology: for the first time, optical fiber was used instead of copper, and data was transmitted serially rather than in parallel. Everyone understood that ESCON was much better and much faster, at least on paper, but it took a great deal of testing and convincing of buyers before the technology was widely adopted. ESCON is believed to have appeared during a sluggish market; in addition, devices supporting the standard were introduced with a noticeable delay, so the technology met a lukewarm reception, and it took almost four years for it to gain wide adoption.

With FICON, history has largely repeated itself. IBM first introduced the technology on S/390 servers back in 1997, and to many analysts it was immediately clear that this was in many ways a technically more advanced solution. However, for several years FICON was used almost exclusively for connecting tape drives (a vastly improved solution for backup and recovery) and printers. It was not until 2001 that IBM finally equipped its Enterprise Storage Server, codenamed Shark, with FICON. This event again coincided with a severe economic downturn, which slowed the adoption of new technologies in enterprises. Literally a year later, circumstances arose that accelerated the adoption of FICON: by then the concept of fiber was no longer new, and storage area network (SAN) technologies were ubiquitous both in the mainframe world and beyond.

The storage market continues to grow steadily. Today's devices, called directors, which were designed from the outset to support ESCON, now also support the Fiber Channel standard, and FICON solutions are deployed on the same devices. According to the developers, FICON provides significantly more functionality than Fiber Channel.

InfiniBand

The InfiniBand architecture defines a common standard for handling I/O operations for communications, networking, and storage subsystems. This new standard led to the formation of the InfiniBand Trade Association (IBTA, http://www.infinibandta.org). Simply put, InfiniBand is a next-generation I/O architecture standard that takes a networked approach to connecting data center servers, storage, and networking devices.

InfiniBand technology was developed as an open solution that could replace other network technologies in a variety of areas: common LAN technologies (all types of Ethernet) and storage networks (in particular, Fiber Channel), specialized cluster interconnects (Myrinet, SCI, etc.), and even the attachment of I/O devices to PCs as a possible replacement for PCI buses and I/O channels such as SCSI. In addition, an InfiniBand infrastructure could unite fragments built with different technologies into a single system. The advantage of InfiniBand over specialized, high-performance cluster-oriented networking technologies is its versatility. Oracle, for example, supports InfiniBand in its cluster solutions. A year ago, HP and Oracle set a TPC-H performance record (for 1 TB databases) on a ProLiant DL585 InfiniBand cluster running Oracle 10g on Linux. In the summer of 2005, IBM achieved record TPC-H results (for 3 TB databases) on DB2 and SuSE Linux Enterprise Server 9 in an xSeries 346-based InfiniBand cluster, at a cost per transaction almost half that of the closest competitors.

Using a technique called a switched fabric, or switching mesh, InfiniBand moves I/O traffic away from server processors to edge devices and other processors or servers throughout the enterprise. The physical channel is a special cable (link) providing a data transfer rate of 2.5 Gb/s in each direction (InfiniBand 1x). The architecture is layered, comprising four hardware layers and upper layers implemented in software. Within each physical channel, many virtual lanes can be organized and assigned different priorities. For higher speeds there are 4x and 12x versions of InfiniBand, which use 16 and 48 wires and provide data rates of 10 Gb/s (InfiniBand 4x) and 30 Gb/s (InfiniBand 12x), respectively.
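The link widths above follow a simple pattern. A minimal sketch; the 8b/10b encoding detail (usable data rate is 80% of the signalling rate) is an added assumption about the original InfiniBand signalling scheme, not stated in the text:

```python
# InfiniBand link widths: a 1x lane signals at 2.5 Gb/s per direction;
# 4x and 12x links simply aggregate lanes. The 8b/10b factor below is
# an assumption about the original (SDR) signalling scheme.

LANE_RATE = 2.5e9  # b/s signalling rate per lane, per direction

def ib_signal_rate(width):
    return width * LANE_RATE

def ib_data_rate(width):
    return ib_signal_rate(width) * 0.8  # 8b/10b encoding overhead

for width in (1, 4, 12):
    print(f"InfiniBand {width:2d}x: {ib_signal_rate(width)/1e9:4.1f} Gb/s signalled, "
          f"{ib_data_rate(width)/1e9:4.1f} Gb/s data")
# 1x -> 2.5/2.0, 4x -> 10/8, 12x -> 30/24 Gb/s
```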

Solutions based on the InfiniBand architecture are in demand in four main markets: enterprise data centers (including data warehouses), high-performance computing clusters, embedded applications, and communications. InfiniBand allows standard servers to be combined into cluster systems, giving data centers the performance, scalability and fault tolerance usually provided only by high-end platforms costing millions of dollars. In addition, InfiniBand storage can be attached to server clusters, linking all storage resources directly to compute resources. The high-performance cluster market is always aggressively looking for new ways to expand computing capability and therefore benefits greatly from the high throughput, low latency and excellent scalability offered by low-cost InfiniBand products. Embedded applications such as military systems, real-time systems and video stream processing benefit from the reliability and flexibility of InfiniBand connections. Finally, the communications market constantly demands more connection bandwidth, which 10- and 30-Gb/s InfiniBand links provide.

The physical layer of the InfiniBand protocol defines electrical and mechanical characteristics, including fiber optic and copper cables, connectors, and the parameters governing hot-swap behavior. The link layer defines the parameters of transmitted packets, point-to-point connection operations, and switching within a local subnet. The network layer defines the rules for routing packets between subnets; within a subnet this layer is not required. The transport layer provides the assembly of packets into messages, channel multiplexing, and transport services.

Let's note some key features of the InfiniBand architecture. I/O and clustering use a single InfiniBand card in the server, eliminating the need for separate cards for communications and storage (however, in a typical server, it is recommended to have two such cards configured for redundancy). You only need one connection to an InfiniBand switch per server, IP network, or SAN (redundancy comes down to simply duplicating a connection to another switch). Finally, the InfiniBand architecture resolves connectivity issues and bandwidth limitations within the server while still providing the required bandwidth and communication capability for external storage systems.

The InfiniBand architecture consists of three main components (Fig. 3). An HCA (Host Channel Adapter) is installed inside the server or workstation acting as the host. It serves as the interface between the memory controller and the outside world and connects host machines to an InfiniBand-based network infrastructure. The HCA implements the messaging protocol and the basic DMA mechanism; it connects to one or more InfiniBand switches and can exchange messages with one or more TCAs. A TCA (Target Channel Adapter) connects devices such as drives, disk arrays or network controllers to the InfiniBand network; it serves as the interface between an InfiniBand switch and the I/O controllers of peripheral devices. These controllers need not be of the same type or class, which allows different devices to be combined into one system. The TCA thus acts as an intermediate physical layer between the data traffic of the InfiniBand fabric and more traditional I/O controllers of other subsystems such as Ethernet, SCSI and Fiber Channel. Note that a TCA can also communicate with an HCA directly. InfiniBand switches and routers provide central connection points, and multiple TCAs can be connected to one HCA. InfiniBand switches form the core of the network infrastructure; they are connected by multiple channels to each other and to TCAs, and mechanisms such as link aggregation and load balancing can be implemented. While switches operate within a single subnet formed by directly connected devices, InfiniBand routers combine these subnets, establishing connections between multiple switches.


Fig. 3. The main components of an InfiniBand-based SAN.

Much of the advanced logic of an InfiniBand system is built into the adapters that connect nodes to the I/O system. Each adapter offloads transport tasks from the host: the InfiniBand channel adapter is responsible for organizing I/O messages into packets for delivery over the network. As a result, the host OS and the server processor are freed from this task. It is worth noting that this organization differs fundamentally from communications based on the TCP/IP protocol.

InfiniBand defines a highly flexible set of link- and transport-layer mechanisms for fine-tuning the performance of an InfiniBand SAN based on application requirements, including:

  • variable-size packets;
  • maximum transfer unit sizes of 256 bytes, 512 bytes, 1, 2 or 4 KB;
  • layer 2 local route headers (LRH, Local Route Header) to direct packets to the desired channel adapter port;
  • an optional layer 3 header for global routing (GRH, Global Route Header);
  • multicast support;
  • variant and invariant checksums (VCRC and ICRC) to ensure data integrity.

The maximum transmission unit size determines system characteristics such as packet jitter, encapsulation overhead and latency, which matter when designing multiprotocol systems. The ability to omit global routing information when forwarding to a destination in the local subnet reduces the overhead of local communication. The VCRC code is recalculated each time a packet traverses a link of the communication channel, while the ICRC code is checked when the packet is received at its destination, guaranteeing the integrity of transmission both per link and over the entire communication channel.

InfiniBand defines permission (credit) based flow control to prevent head-of-line blocking and packet loss, including both link-layer and end-to-end flow control. Permission-based link-layer control surpasses the widely used XON/XOFF protocol, eliminating the maximum communication range limitation and providing better link utilization. The receiving end of a link sends the transmitter permissions indicating the amount of data that can be received reliably; data is not transmitted until the receiver signals free space in its receive buffer. The permission transfer mechanism is built into the connection and link protocols to ensure reliable flow control. Link-layer flow control is organized per virtual lane, which prevents the propagation of transmission conflicts that occurs in other technologies.
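The permission mechanism described above can be modelled in a few lines. This is a toy sketch, not the InfiniBand credit protocol itself; buffer sizes and packet names are arbitrary:

```python
# Toy model of permission (credit) based link flow control: the sender
# may transmit only while it holds credits granted by the receiver, so
# the receive buffer can never overflow and no packets are dropped.

class CreditedLink:
    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # initial credits = free buffer slots
        self.rx_buffer = []

    def send(self, packet):
        """Transmit if a credit is available; otherwise wait (no loss)."""
        if self.credits == 0:
            return False
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def consume(self):
        """Receiver drains one packet and returns a credit to the sender."""
        if self.rx_buffer:
            self.rx_buffer.pop(0)
            self.credits += 1

link = CreditedLink(buffer_slots=2)
print([link.send(p) for p in ("p1", "p2", "p3")])  # [True, True, False]
link.consume()          # receiver frees a slot, granting a credit
print(link.send("p3"))  # True
```

Note the contrast with XON/XOFF: the sender never transmits speculatively, so correctness does not depend on how quickly a "stop" signal crosses the link.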

With InfiniBand, communication with remote storage modules, network functions and connections between servers is achieved by connecting all devices through a central, unified fabric of switches and channels. The InfiniBand architecture allows I/O devices to be placed up to 17 m from the server using copper wire, up to 300 m using multimode fiber optic cable, and up to 10 km using single-mode fiber.

Today, InfiniBand is gradually regaining popularity as a backbone technology for clusters of servers and storage systems, and in data centers as the basis for connections between servers and storage. A great deal of work in this direction is being done by the OpenIB Alliance (Open InfiniBand Alliance, http://www.openib.org). In particular, this alliance aims to develop a standard open-source InfiniBand software stack for Linux and Windows. A year ago, support for InfiniBand technology was officially included in the Linux kernel. In addition, at the end of 2005, representatives of OpenIB demonstrated the use of InfiniBand technology over long distances: the highlight of the demo was 10 Gb/s data transfer over a distance of 80.5 km. Data processing centers of a number of companies and scientific organizations participated in the experiment. At each endpoint, the InfiniBand protocol was encapsulated over SONET OC-192c, ATM or 10-Gigabit Ethernet interfaces with no throughput degradation.

Modern information transmission systems are computer networks. The set of all subscribers of a computer network is called a subscriber network. The means of communication and data transmission form a data transmission network (Fig. 2.1).

Fig. 2.1. Block diagram of a computer network.

The data transmission network consists of many geographically dispersed switching nodes connected to each other and to network subscribers using various communication channels.

A switching node is a complex of hardware and software that provides switching of channels, messages or packets. Here the term switching means the procedure of distributing information whereby a data flow arriving at the node over one communication channel is forwarded from the node over other communication channels according to the required transmission route.

A hub in a data transmission network is a device that combines the load of several data transmission channels for subsequent transmission over a smaller number of channels. The use of hubs allows you to reduce the cost of organizing communication channels that connect subscribers to the data transmission network.

A communication channel is a set of technical means and a distribution medium that ensures the transmission of a message of any kind from a source to a recipient using telecommunication signals.

The structure of a computer network built on the principle of exchanging information through the switching nodes of a data transmission network assumes that network subscribers have no direct (dedicated) communication channels between themselves; instead, each is connected to the nearest switching node and, through it (and other intermediate nodes), to any other subscriber of this or even another computer network.

Building computer networks around the switching nodes of a data transmission network has several advantages:

– a significant reduction in the total number and length of communication channels, since there is no need to organize direct channels between every pair of subscribers;

– a high degree of utilization of channel bandwidth, since the same channels carry different kinds of information between different subscribers;

– the possibility of unifying hardware and software exchange facilities for various subscribers, including the creation of integrated service nodes capable of switching information flows containing data, voice, fax, and video signals.

There are three switching methods used in data networks today: circuit switching, message switching, and packet switching.

With circuit switching, a direct connection is created in the form of an end-to-end data transmission channel (without intermediate storage of information along the way). The physical meaning of circuit switching is that, before information transfer begins, a direct electrical connection is established through the switching nodes between the sending and receiving subscribers. The connection is set up by the sender issuing a special call message, which contains the number (address) of the called subscriber and, as it passes through the network, occupies communication channels along the entire path of the subsequent message transmission. Obviously, with circuit switching all segments of the end-to-end channel being formed must be free. If the call cannot be served in some part of the network (for example, there are no free channels between the switching nodes that make up the transmission path), the calling subscriber is refused a connection and, from the network's point of view, the call is lost; to transmit the message, the sending subscriber must repeat the call.

After the connection is established, the sending subscriber is notified that data transfer may begin. The fundamental feature of circuit switching is that all channels occupied while the connection is being established are used simultaneously during data transfer and are released only after the transfer between the subscribers is complete. A typical example of a circuit-switched network is the telephone network.
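The circuit-switching behaviour described above can be sketched as follows: every link along the path must be free before the call succeeds, all of them are occupied together, and all are released only when the call ends. This is a minimal model; the link representation and function names are assumptions for illustration:

```python
def establish_circuit(free_links, path):
    """Try to set up an end-to-end circuit: reserve every link
    on the path, or fail entirely if any link is busy."""
    links = list(zip(path, path[1:]))
    if any(link not in free_links for link in links):
        return None                      # call lost: the subscriber must redial
    for link in links:
        free_links.remove(link)          # channel occupied for the whole call
    return links

def release_circuit(free_links, circuit):
    """Channels are freed only after the data transfer is complete."""
    free_links.update(circuit)

free = {("A", "N1"), ("N1", "N2"), ("N2", "B")}
circuit = establish_circuit(free, ["A", "N1", "N2", "B"])
print(circuit)                                          # all three links reserved
print(establish_circuit(free, ["A", "N1", "N2", "B"]))  # None: links are busy
```

A second call over the same path fails until `release_circuit` returns the links, which mirrors the "all or nothing" nature of circuit switching.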

With message switching, a message is received and stored at a switching node, and only then forwarded. This definition captures the main difference between message switching and circuit switching: with message switching, messages are stored intermediately at the switching nodes and processed there (priority determination, multicast distribution, recording and archiving, etc.). To be processed, messages must have a format accepted throughout the network, that is, the same arrangement of the individual message elements. A message from a subscriber first arrives at the switching node to which that subscriber is connected. The node processes the message and, based on its address, determines the direction of further transmission. If all channels in the chosen direction are busy, the message waits in a queue until the required channel is free. Once the message reaches the node to which the recipient is connected, it is delivered to the recipient in full over the channel between that node and the subscriber. Thus, as it passes through the network, a message occupies only one communication channel at any given moment.
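The store-and-forward behaviour of a message-switching node, including the queue that forms while the outgoing channel is busy, can be sketched like this (the class and method names are illustrative assumptions):

```python
from collections import deque

class MessageSwitchingNode:
    """Stores each complete message and forwards it in FIFO order
    when the outgoing channel becomes free."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def receive(self, message):
        self.queue.append(message)       # the message is stored in full at the node

    def forward(self):
        # hand the oldest waiting message to the next channel, if any
        return self.queue.popleft() if self.queue else None

node = MessageSwitchingNode("N1")
node.receive("message 1")
node.receive("message 2")                # waits in the queue behind message 1
print(node.forward())                    # prints "message 1"
```

Each message crosses the network one hop at a time, which is why it occupies only one communication channel at any given moment.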

Packet switching is a kind of message switching in which messages are broken into parts called packets, and it is these packets that are transmitted, received, and stored.

The packets are numbered and supplied with addresses, which allows them to be transmitted over the network simultaneously and independently of one another.
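Splitting a message into numbered, addressed packets and restoring the original order at the receiver can be sketched as follows (the field names such as `seq` are assumptions for illustration, not a specific protocol header):

```python
def packetize(message, size, src, dst):
    """Break a message into packets of at most `size` characters,
    each carrying source/destination addresses and a sequence number."""
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[off:off + size]}
        for i, off in enumerate(range(0, len(message), size))
    ]

def reassemble(packets):
    """Packets may arrive in any order; sequence numbers restore it."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("HELLO, WORLD", 4, src="A", dst="B")
print(len(packets))                      # 3 packets
print(reassemble(reversed(packets)))     # HELLO, WORLD
```

Because each packet carries its own address and number, the packets can take different routes through the switching nodes and still be reassembled correctly at the destination.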
