Hardware and software setup


In this post we discuss domain authentication in Linux using smart cards and JaCarta PKI USB tokens as a second authentication factor. While there is plenty of information about local authentication through a PAM module, domain infrastructure and authentication with Kerberos tickets in Linux are poorly covered, especially in Russian. As the operating system we will take Astra Linux, and we will demonstrate the setup using Astra Linux Directory (ALD) as the example.

The benefit of such a solution is obvious: it lets you abandon password authentication, which drastically reduces the influence of the "human factor" on system security. In addition, the electronic keys provide a number of extra capabilities inside the operating system after domain authentication.

A brief introduction to Astra Linux Directory (ALD) and JaCarta PKI

The Astra Linux Directory (ALD) domain is intended for organizing a common user space (a local area network domain) in automated systems.

ALD uses LDAP, Kerberos5, Samba/CIFS technologies and provides:

  • centralized storage and management of user and group accounts;
  • end-to-end authentication of users in the domain using the Kerberos5 protocol;
  • functioning of the global repository of home directories accessible via Samba/CIFS;
  • automatic configuration of UNIX, LDAP, Kerberos, Samba, and PAM files;
  • keeping the LDAP and Kerberos databases consistent;
  • creation of backup copies of the LDAP and Kerberos databases, with the possibility of recovery;
  • integration into the domain of the DBMS, mail servers, web servers, print servers, and other services included in the distribution.

In an Astra Linux Directory (ALD) environment, JaCarta PKI electronic keys can be used for passwordless two-factor user authentication in the ALD domain. Moreover, the same electronic keys can serve various scenarios inside the OS after authentication, such as electronic signature, storage of key containers, access to web resources, key forwarding into an MS Windows session, and access to VDI services such as VMware or Citrix.

Setting process

Demo zone example

  • Server - Astra Linux Smolensk, with packages installed:
    • JaCarta IDProtect 6.37;
    • libccid;
    • pcscd;
    • libpcsclite1;
    • krb5-pkinit;
    • libengine-pkcs11-openssl;
    • opensc.
  • Client - Astra Linux Smolensk SE 1.5 4.2.0-23-generic, x86_64, with packages installed:
    • JaCarta IDProtect 6.37;
    • libccid;
    • pcscd;
    • libpcsclite1;
    • krb5-pkinit.

It is assumed that ALD is already deployed, that at least one domain user exists who can authenticate with a password, and that the client and server clocks are synchronized.

Installing drivers on the server and client

To enable JaCarta PKI smart card support, install the following packages on both the client and the server: libccid, pcscd, libpcsclite1. After installing these mandatory packages, install the JaCarta IDProtect driver (the idprotectclient package), which can be downloaded from the official website of Aladdin R.D.

To let the Kerberos subsystem work with the smart card, install the krb5-pkinit package on the client and server in addition to the preinstalled ald/kerberos packages.

To enable the issuance of keys and certificates for JaCarta PKI on the server also install the packages libengine-pkcs11-openssl and opensc.
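On Astra Linux (a Debian-derived system) the packages above can presumably be installed with apt; the package names follow this guide, so verify them against your repository:

```shell
# Client and server: smart-card and Kerberos PKINIT support
sudo apt-get install libccid pcscd libpcsclite1 krb5-pkinit
# Server only: needed to issue keys and certificates for the token
sudo apt-get install libengine-pkcs11-openssl opensc
```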

Installing and configuring a certification authority on a server

OpenSSL will act as the certification authority (CA).

OpenSSL is an open-source cryptographic toolkit for working with SSL/TLS. It can create RSA, DH, and DSA keys and X.509 certificates, sign them, and generate CSRs and CRTs.

All settings in the guide are made for the EXAMPLE.RU test domain. Let's assume that the server and the client belong to the EXAMPLE.RU domain, the server name is kdc, and the client is client. When configuring, use the name of your domain, server, and client. Do the following.

  1. Create the CA directory with mkdir /etc/ssl/CA and change to it. This directory will contain the generated keys and certificates.
  2. Create a CA key and certificate:
    $ openssl genrsa -out cakey.pem 2048
    $ openssl req -key cakey.pem -new -x509 -days 365 -out cacert.pem
    In the dialog, fill in the required information about your certificate authority. Specify EXAMPLE.RU in the Common name.
  3. Create a KDC key and certificate:
    $ openssl genrsa -out kdckey.pem 2048
    $ openssl req -new -out kdc.req -key kdckey.pem
    Fill in the required information about your server in the dialog. Specify kdc in Common name.
  4. Set the environment variables. They apply only within the current session: they are not visible to other sessions and are not preserved after the session ends.
    export REALM=EXAMPLE.RU # your domain
    export CLIENT=kdc # your server
  5. Download the pkinit_extensions file.

The contents of pkinit_extensions (put it in the directory from which you execute the commands):

[kdc_cert]
basicConstraints=CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment, keyAgreement
#Pkinit EKU
extendedKeyUsage = 1.3.6.1.5.2.3.5
subjectKeyIdentifier=hash

# Copy subject details
issuerAltName=issuer:copy
# Add id-pkinit-san (pkinit subjectAlternativeName)
subjectAltName=otherName:1.3.6.1.5.2.2;SEQUENCE:kdc_princ_name

[kdc_princ_name]
realm = EXP:0, GeneralString:$(ENV::REALM)
principal_name = EXP:1, SEQUENCE:kdc_principal_seq

[kdc_principal_seq]
name_type = EXP:0, INTEGER:1
name_string = EXP:1, SEQUENCE:kdc_principals

[kdc_principals]
princ1 = GeneralString:krbtgt
princ2 = GeneralString:$(ENV::REALM)

[client_cert]
# These extensions are added when "ca" signs a request.
basicConstraints=CA:FALSE
keyUsage = digitalSignature, keyEncipherment, keyAgreement
extendedKeyUsage = 1.3.6.1.5.2.3.4
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
subjectAltName=otherName:1.3.6.1.5.2.2;SEQUENCE:princ_name
# Copy subject details
issuerAltName=issuer:copy

[princ_name]
realm = EXP:0, GeneralString:$(ENV::REALM)
principal_name = EXP:1, SEQUENCE:principal_seq

[principal_seq]
name_type = EXP:0, INTEGER:1
name_string = EXP:1, SEQUENCE:principals

[principals]
princ1 = GeneralString:$(ENV::CLIENT)

  6. Issue the KDC certificate:
    $ openssl x509 -req -in kdc.req -CAkey cakey.pem -CA cacert.pem -out kdc.pem -extfile pkinit_extensions -extensions kdc_cert -CAcreateserial -days 365
  7. Move the files kdc.pem, kdckey.pem, and cacert.pem to /var/lib/krb5kdc/.
  8. Back up /etc/krb5kdc/kdc.conf, then edit it, adding the following entries:
    pkinit_identity = FILE:/var/lib/krb5kdc/kdc.pem,/var/lib/krb5kdc/kdckey.pem
    pkinit_anchors = FILE:/var/lib/krb5kdc/cacert.pem
    The first entry specifies the server's keys and certificate; the second points to the CA root certificate.
  9. To apply the changes, run:
    /etc/init.d/krb5-admin-server restart
    /etc/init.d/krb5-kdc restart

Smart card preparation. Issuing keys and user certificate

Make sure the libengine-pkcs11-openssl and opensc packages are installed. Connect the device to be prepared.

Initialize the device and set the user PIN. Keep in mind that initialization will irrecoverably delete all data on the JaCarta PKI.

Use the pkcs11-tool utility for initialization:

pkcs11-tool --slot 0 --init-token --so-pin 00000000 --label "JaCarta PKI" --module /lib64/libASEP11.so

--slot 0 - indicates which virtual slot the device is connected to. As a rule this is slot 0, but other values (1, 2, etc.) are possible;
--init-token - token initialization command;
--so-pin 00000000 - JaCarta PKI administrator PIN. The default value is 00000000;
--label "JaCarta PKI" - device label;
--module /lib64/libASEP11.so - path to the libASEP11.so library.

To set a user PIN, use the command:

pkcs11-tool --slot 0 --init-pin --so-pin 00000000 --login --pin 11111111 --module /lib64/libASEP11.so

--slot 0 - indicates which virtual slot the device is connected to. As a rule this is slot 0, but other values (1, 2, etc.) are possible;
--init-pin - user PIN setting command;
--so-pin 00000000 - JaCarta PKI administrator PIN. The default value is 00000000;
--login - login command;
--pin 11111111 - the user PIN to be set;
--module /lib64/libASEP11.so - path to the libASEP11.so library. It is installed as part of the idprotectclient package; see "Installing drivers on the server and client".

Generate keys on the device with the following command:

pkcs11-tool --slot 0 --login --pin 11111111 --keypairgen --key-type rsa:2048 --id 42 --label "test1 key" --module /lib64/libASEP11.so

--slot 0 - indicates which virtual slot the device is connected to. As a rule this is slot 0, but other values (1, 2, etc.) are possible;
--login --pin 11111111 - log in with the user PIN "11111111"; if your card has a different user PIN, enter it;
--keypairgen --key-type rsa:2048 - generate a 2048-bit RSA key pair;
--id 42 - sets the CKA_ID attribute of the key. CKA_ID can be anything, but remember this value! It is required for the further steps of preparing the device;
--label "test1 key" - sets the CKA_LABEL attribute of the key. The attribute can be anything;
--module /lib64/libASEP11.so - path to the libASEP11.so library. Installed as part of the idprotectclient package; see "Installing drivers on the server and client".

Generate a certificate request with the openssl utility. To do this, enter the following commands:

# openssl
OpenSSL> engine dynamic -pre SO_PATH:/usr/lib/ssl/engines/engine_pkcs11.so -pre ID:pkcs11 -pre LIST_ADD:1 -pre LOAD -pre MODULE_PATH:/lib64/libASEP11.so
OpenSSL> req -engine pkcs11 -new -key 0:42 -keyform engine -out client.req -subj "/C=RU/ST=Moscow/L=Moscow/O=Aladdin/OU=dev/CN=test1 (!Your_User!)/[email protected]"
OpenSSL> quit

Pay attention to -new -key 0:42, where 0 is the number of the virtual slot with the device and 42 is the CKA_ID attribute of the previously generated keys. The information that must appear in the request goes into the -subj field:
"/C=RU/ST=Moscow/L=Moscow/O=Aladdin/OU=dev/CN=test1 (!Your_User!)/[email protected]".

Set the environment variables:

$ export REALM=EXAMPLE.RU # your domain
$ export CLIENT=test1 # your user

and issue the user certificate:

$ openssl x509 -CAkey cakey.pem -CA cacert.pem -req -in client.req -extensions client_cert -extfile pkinit_extensions -out client.pem -days 365

Next, convert the resulting certificate from PEM to DER:

# openssl x509 -in client.pem -out client.cer -inform PEM -outform DER

Write the resulting certificate to the token:

pkcs11-tool --slot 0 --login --pin 11111111 --write-object client.cer --type cert --label "Certificate" --id 42 --module /lib64/libASEP11.so

where:
--slot 0 - indicates which virtual slot the device is connected to. As a rule this is slot 0, but other values (1, 2, etc.) are possible;
--login --pin 11111111 - log in with the user PIN "11111111"; if your card has a different user PIN, enter it;
--write-object client.cer - the object to write and the path to it;
--type cert - the type of the object being written is a certificate;
--label "Certificate" - sets the CKA_LABEL attribute of the certificate. The attribute can be anything;
--id 42 - sets the CKA_ID attribute of the certificate. It must be the same CKA_ID as for the keys;
--module /lib64/libASEP11.so - path to the libASEP11.so library.
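As a quick sanity check (a sketch; the slot, PIN, and module path are the same as above), you can list the objects now stored on the token and make sure both the key pair and the certificate are present:

```shell
pkcs11-tool --slot 0 --login --pin 11111111 \
  --list-objects --module /lib64/libASEP11.so
```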

Client setting. Health check

Create the /etc/krb5/ directory on the client and copy the CA certificate (cacert.pem) into it from the server.

Configure Kerberos in /etc/krb5.conf by adding the following lines to the [libdefaults] section.


default_realm = EXAMPLE.RU
pkinit_anchors = FILE:/etc/krb5/cacert.pem
# for token authentication
pkinit_identities=PKCS11:/lib64/libASEP11.so
Check:
kinit

When prompted for the card's PIN code, enter it.

To verify that a Kerberos ticket was successfully obtained for the user, run the klist command. To delete the ticket, use kdestroy.
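A minimal check sequence, assuming a hypothetical domain user test1:

```shell
kinit test1@EXAMPLE.RU   # prompts for the smart-card PIN when PKINIT is active
klist                    # shows the cached krbtgt/EXAMPLE.RU ticket
kdestroy                 # removes the ticket cache
```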

To log in to a domain using a smart card, on the OS logon screen, instead of entering a password, enter the PIN code from the smart card.

This completes the setup. Unfortunately, the system will not adapt the login window to the smart card by itself; it remains the standard one, but with a little effort you can achieve a nicer result.


Windows has included integrated network authentication and single sign-on for quite some time now. Prior to Windows 2000, Windows NT domain controllers provided authentication services to Windows clients using the NTLM protocol. Although NTLM was not as secure as it originally seemed, it was very useful, since it solved the problem of having to maintain duplicate user accounts on different servers on the network.

Beginning with Windows 2000, Microsoft moved from NTLM to Active Directory and its integrated Kerberos authentication services. Kerberos was much more secure than NTLM and also scaled better. In addition, Kerberos was an industry standard already in use on Linux and UNIX systems, which opened the door to integrating those platforms with Windows.

Linux Authentication

Initially, Linux (and the GNU tools and libraries that ran on it) did not have a single authentication mechanism. As a consequence, Linux application developers typically built their own authentication schemes, either by looking up usernames and password hashes in /etc/passwd (the text file that traditionally contained Linux user credentials) or by providing a completely different (and separate) mechanism.

The resulting assortment of authentication mechanisms was unmanageable. In 1995, Sun introduced a mechanism called Pluggable Authentication Modules (PAM). PAM provides a common set of authentication APIs that all application developers can use, plus an administrator-configurable back end that allows various "pluggable" authentication schemes. Using the PAM APIs for authentication and the Name Service Switch (NSS) APIs for looking up user information allows Linux application developers to write less code and lets Linux administrators configure and manage the authentication process in one place.

Most releases of Linux came with several PAM authentication modules, including modules that support both LDAP directory authentication and Kerberos authentication. These modules can be used to authenticate against Active Directory, but there are significant limitations to this, which I will discuss later in this article.

Samba and Winbind

Samba is an open source project that provides integration between Windows and Linux environments. Samba contains components that give Linux computers access to Windows file and print services, as well as Linux-based services that mimic Windows NT 4.0 domain controllers. Using the Samba client components, Linux computers can use the Windows authentication services provided by Windows NT domain controllers and Active Directory.

Of particular interest to us is the part of Samba called Winbind. Winbind is a daemon (a "service" in Windows terms) that runs on Samba clients and acts as a proxy between PAM and NSS running on a Linux machine on the one hand, and Active Directory running on a domain controller on the other. Specifically, Winbind uses Kerberos to authenticate with Active Directory and LDAP to obtain information about users and groups. Winbind also provides additional services, such as the ability to locate domain controllers using an algorithm similar to Active Directory's DCLOCATOR, and the ability to change Active Directory passwords by communicating with a domain controller over RPC.

Winbind solves a number of issues that remain when using Kerberos with PAM alone. In particular, instead of hard-coding a domain controller for PAM authentication, Winbind selects a domain controller by looking up DNS locator records, much like Microsoft's DC LOCATOR module.
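For illustration, a few standard Winbind diagnostic commands (assuming winbindd is already joined to the domain and running):

```shell
wbinfo -t        # verify the machine trust account against the DC
wbinfo -u        # enumerate domain users through winbindd
wbinfo -g        # enumerate domain groups
getent passwd    # NSS lookups should now also return domain accounts
```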

Three Authentication Strategies

Given the availability of LDAP, Kerberos, and Winbind on Linux computers, there are three different implementation strategies that can be applied to enable a Linux computer to use Active Directory for authentication.

Using LDAP Authentication. The simplest, but least satisfactory, way to use Active Directory for authentication is to configure PAM to use LDAP authentication, as shown in Fig. 1. Although Active Directory is an LDAPv3 service, Windows clients use Kerberos (with NTLM as a fallback) for authentication rather than LDAP.

With LDAP authentication (referred to as LDAP bind), the username and password are sent over the network in clear text. This is insecure and unacceptable for most purposes.
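For example, a plain LDAP simple bind performed with a tool such as ldapsearch (the DC and DN values below are placeholders) sends the password entered at the -W prompt unprotected unless TLS is layered on top:

```shell
ldapsearch -x -H ldap://dc1.example.ru \
  -D 'CN=user1,CN=Users,DC=example,DC=ru' -W \
  -b 'DC=example,DC=ru' '(sAMAccountName=user1)'
```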

Fig 1. Active Directory authentication using LDAP

The only way to mitigate the risk of passing credentials in the open is to encrypt the client-to-Active Directory communication channel with something like SSL. This is certainly possible, but it puts the additional burden of managing SSL certificates on both the domain controller and the Linux machine. In addition, the LDAP PAM module does not support changing reset or expired passwords.

Using LDAP and Kerberos. Another strategy for using Active Directory for Linux authentication is to configure PAM to use Kerberos authentication and NSS to use LDAP to look up user and group information, as shown in Fig. 2. This scheme has the advantage of being relatively secure, and it uses the "built-in" features of Linux. However, it does not use the DNS service location (SRV) records published by Active Directory domain controllers, so you are forced to pick a specific set of domain controllers to authenticate against. It also does not provide a particularly intuitive way to manage expiring Active Directory passwords or, until recently, adequate group membership lookups.


Fig 2. Active Directory authentication using LDAP and Kerberos

Using Winbind. The third way to use Active Directory for Linux authentication is to configure PAM and NSS to call the Winbind daemon. Winbind translates the various PAM and NSS requests into the corresponding Active Directory calls, using LDAP, Kerberos, or RPC, whichever is most appropriate. Fig. 3 shows this strategy.


Fig 3. Active Directory Authentication Using Winbind

Our implementation plan

The improved Active Directory integration led me to choose Winbind on Red Hat Enterprise Linux 5 (RHEL5) for my Linux-to-Active Directory integration project. RHEL5 is the current commercial release of Red Hat Linux and is quite popular in enterprise data centers.

Essentially, five separate steps are required to make RHEL5 authenticate against Active Directory:

  1. Find and download the appropriate Samba package and other dependent components.
  2. Build Samba.
  3. Install and configure Samba.
  4. Set up Linux, specifically PAM and NSS.
  5. Set up Active Directory.

The next few sections of this article describe these steps in more detail.

Finding the right programs

One of the biggest differences between Linux and Windows is that Linux consists of a small operating system kernel and a huge collection of separately downloadable and installable components. This makes it possible to create carefully tailored Linux distributions optimized for particular tasks, but it also makes setting up and managing a server quite difficult. Different distributions deal with this in different ways. Red Hat (and its non-commercial sibling Fedora) uses the Red Hat Package Manager (RPM) to install and manage these components.

The Linux components for Red Hat come in two forms. RPM files contain binaries that have been precompiled and built for a particular combination of component version, Linux release, and CPU architecture. So you can download and install, for example, version 1.3.8-5 of the Common UNIX Printing System (CUPS) built for Fedora version 10 running on an Intel x86 CPU. Given a dozen different CPU architectures, more than 100 Linux releases, and thousands of packages and versions, you can see there is an enormous number of binary RPM packages to choose from.

RPM source files, on the other hand, contain the actual source code for a package. The user is expected to download and install the source files, configure the build options, and then compile and link the binaries himself. The idea of building your own operating system is intimidating for a Windows professional accustomed to installing what Microsoft ships on a Windows installation CD, but the package manager makes the process relatively painless and surprisingly reliable. The Samba group releases updates and security patches at a furious pace; in July and August 2008 alone there were four releases of Samba 3.2, containing over 100 bug fixes and security fixes in total. For this project, I downloaded the source files for the latest stable version of Samba, version 3.0.31.

Why did I download the Samba source code instead of a precompiled set of binaries? At first, of course, I tried exactly that. But after many hours with the debugger, I discovered that the binaries I had downloaded were not built in the way required to support Active Directory authentication. In particular, the code that supports mapping Active Directory identities to Linux IDs was disabled in the default builds, so I had to rebuild Samba with the proper build options. I discuss the ID-mapping issue in detail below.

Even though Linux itself is a small kernel, the Red Hat Enterprise edition comes pre-installed with many packages. This usually makes life a lot easier by allowing you to start with a complete and working operating system, but pre-installed packages sometimes conflict with programs that are supposed to be installed later.

I did not include Samba in my Red Hat installation (Samba is usually installed by default) because I needed a newer version. However, the newer version of Samba requires newer versions of several other libraries and utilities that were already installed. The problems associated with these dependencies are annoying, but they are easy to solve with RPM.

There are many websites that host binary RPM packages. The one I used (simply because I found it first) is called PBONE and is located at rpm.pbone.net. It has a handy way to find packages and has all the binaries that were required for my CPU architecture (i386) and operating system editions (Red Hat Enterprise Linux 5/Fedora 7 and 8).

I had to download and update the packages listed in Fig. 4 to build and install the latest version of Samba 3.0 (there is an even newer 3.2 tree that I have not tried). Note that all of these packages are for Fedora Core (fc) releases. The Red Hat distribution is based on the same source code as Fedora and is fully compatible with it; packages built for Fedora Core 7 and later will run unchanged on RHEL5. Place the downloaded RPM files in the /usr/src/redhat/RPMS directory.

Fig. 4. Packages required to build and install Samba 3.0.31

Building Samba

The first step in building Samba is to download the correct source RPM package. I downloaded the Samba 3.0.31 source RPM from the PBONE website. Next, place the downloaded source RPM file in /usr/src/redhat/SRPMS; this is the standard directory for source RPM packages during the build process.

Open a terminal session (a "command prompt window" in Windows terms) and navigate to the SRPMS directory. Then install the source package using the command shown in Fig. 5.
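The command in Fig. 5 amounts to something like the following (the file name matches the version used here):

```shell
cd /usr/src/redhat/SRPMS
rpm -i samba-3.0.31-0.src.rpm   # unpacks sources into ../SOURCES and the spec into ../SPECS
```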


Fig. 5. Installing the Samba source RPM package

If you get the warning "user mockbuild does not exist - using root", don't worry. It indicates only that the mock build utilities are not installed; the build process works without them.

Next, navigate to the /usr/src/redhat/SPECS directory and edit the samba.spec file, which contains the Samba build options. Find the line that starts with "CFLAGS=" and make sure the "--with-shared-modules=idmap_ad,idmap_rid" option is present. This option ensures that the build includes the code that correctly maps Active Directory identities to Linux user IDs (UIDs). Fig. 6 shows this option.
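A quick way to confirm the option before building (paths as used in this guide):

```shell
cd /usr/src/redhat/SPECS
grep -n 'with-shared-modules' samba.spec   # should list idmap_ad,idmap_rid
```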


Fig. 6. The with-shared-modules build option

Next, you may need to update some libraries on your computer to build and install Samba properly; it depends on which library versions are already installed. In my case, I had to install the packages listed in Fig. 4 using the rpm --install command; in some cases I had to use the --force option to overcome dependency issues.

To build Samba, navigate to the /usr/src/redhat directory and run rpmbuild -bb SPECS/samba.spec, as shown in Fig. 7. This procedure leaves a new RPM file, samba-3.0.31-0.i386.rpm, in the /usr/src/redhat/RPMS directory. We will install this RPM file later in the project.


Fig. 7. Building the Samba binary RPM

Configuring Linux Networking

To authenticate with Active Directory, the Linux computer must be able to contact the domain controller. For this to happen, three network settings must be configured.

First, make sure the network interface on the Linux machine is properly configured, either via DHCP or by assigning an appropriate IP address and netmask with the ifconfig command. On RHEL5 you can configure networking by selecting Network from the System | Administration menu, as shown in Fig. 8.


Fig 8. Network configuration

Next, make sure the Linux computer's DNS resolver points at the same DNS server the domain controllers use; in most cases this is a domain controller in the domain you want to join the Linux machine to, assuming you use Active Directory-integrated DNS. The DNS resolver is configured on the DNS tab of the same network configuration utility, as shown in Fig. 9.


Fig 9. Installing a basic DNS resolver

Finally, you need to set the Linux computer's name to reflect its name in the domain. Although the name can be set in the network configuration application, that does not always seem to work properly.

Instead, edit /etc/hosts and add an entry below the localhost.localdomain entry in the form <IP address> <fully qualified domain name> <short name>. (Example: "10.7.5.2 rhel5.linuxauth.local linuxauth".) Note that if this is not done, the wrong machine object will be created in the directory when the Linux machine is joined to the domain.

Configuring Time Synchronization in Linux

The Kerberos protocol relies on the clocks of the authenticating systems being reasonably accurate. By default, Active Directory allows a maximum clock skew of five minutes. To keep the clocks of the Linux systems and the domain controllers within this bound, configure the Linux systems to use a domain controller's NTP service.
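A sketch of doing the same from the command line (the DC address is a placeholder):

```shell
ntpdate 10.7.5.1        # one-off sync against the PDC Emulator
service ntpd start      # then keep time with the NTP daemon
chkconfig ntpd on       # start it on boot (RHEL5 SysV style)
```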

On the Linux server, run the Date & Time utility from the System | Administration menu and select the NTP tab. Check the Enable Network Time Protocol box and add the IP address of the domain controller you want to use as the network time source. Note that this should normally be the domain controller holding the Primary Domain Controller (PDC) Emulator FSMO role in the domain. Fig. 10 shows an example of configuring a network time source for Linux.


Figure 10. Setting the NTP protocol

Setting up PAM and NSS

PAM and NSS provide a means of connecting a Linux application, such as the desktop, and Winbind. Like many Linux services, PAM and NSS are configured via text files. First we'll look at setting up PAM.

PAM provides four authentication-related facilities to the applications that use it. The authentication facility lets an application determine who is using it. The account facility provides account management functions not directly related to authentication, such as restricting logon hours. The password facility provides mechanisms for requesting and managing passwords. The session facility performs user-specific setup and teardown tasks for the application, such as logging or creating files in the user's directory.

Red Hat's PAM configuration files are stored in the /etc/pam.d directory, which contains a text file for each application that uses PAM for authentication. For example, the file /etc/pam.d/gdm contains the PAM configuration for the Gnome Desktop Manager (GDM), Red Hat's default windowing environment. Each PAM configuration file contains several lines, each of which defines some aspect of the PAM authentication process. Fig. 11 shows the contents of the PAM configuration file for GDM.


Fig. 11. PAM configuration file for the Gnome Desktop Manager

Each entry in a PAM configuration file has the form <management group> <control> <module> <parameters>, where <management group> corresponds to the facility the entry refers to: auth, account, password, or session. The control keywords described in Fig. 12 tell PAM how to process the entry. The third column of the file contains the name of a PAM shared library in the /lib/security directory. Shared libraries contain dynamically loaded executable code, like a DLL on Windows. The remaining terms after the module name are parameters that PAM passes to the shared library.

Fig. 12. PAM control keywords

Keyword - Description

Required - If the module succeeds, PAM continues evaluating the remaining entries for the management group, and the result is determined by the remaining modules. If it fails, PAM continues evaluating but returns failure to the calling application.
Requisite - If the module succeeds, PAM continues evaluating the management group entries. If it fails, PAM returns to the calling application without further processing.
Sufficient - If the module succeeds, PAM returns success to the calling application. If it fails, PAM continues evaluating, but the result is determined by subsequent modules.
Optional - PAM ignores the module's result unless it is the only module specified for the management group.
Include - PAM includes the contents of the referenced PAM configuration file and processes the entries it contains.

You may notice that each management group has multiple entries. PAM processes entries in order by calling the named module. The module then returns success or failure, and PAM takes action based on the control keyword.

You may notice that the PAM configuration file for GDM includes system-auth for all management groups. This is how PAM sets the default authentication behavior for GDM. By modifying system-auth, you change this behavior for every application that includes system-auth in its PAM configuration. The default system-auth file is shown in Fig. 13.


Fig. 13. The system-auth PAM configuration file

The Name Service Switch (NSS) module hides the specifics of the system's data stores from the application developer, much as PAM hides the details of authentication. NSS lets an administrator specify how the system databases are stored. In particular, the administrator can specify how username and password information is stored. Since we want applications to look up user information in Active Directory via Winbind, we must change the NSS configuration to reflect this.
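The resulting entries in nsswitch.conf look roughly like the lines below (written to a temporary file here purely for illustration; the real file is /etc/nsswitch.conf, which system-config-authentication edits for you):

```shell
# Sketch of the NSS databases that should consult winbind after the change
cat > /tmp/nsswitch.winbind.example <<'EOF'
passwd:  files winbind
shadow:  files winbind
group:   files winbind
EOF
grep -c winbind /tmp/nsswitch.winbind.example   # each of the three databases consults winbind
```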

Red Hat includes a small graphical utility for configuring PAM and NSS called system-config-authentication. It takes care of most (but not all) changes that need to be made to the system-auth and nss.conf files.

Run the system-config-authentication application; you should see a dialog similar to the one in Fig. 14. Check the Winbind box on both the User Information tab (it configures the nss.conf file) and the Authentication tab (it modifies the system-auth file).


Fig. 14. The system-config-authentication dialog

Click the Configure Winbind button to display the dialog shown in Fig. 15. Enter the name of the domain you want to authenticate users against in the Winbind Domain field and select "ads" as the security model. Enter the DNS name of the Active Directory domain in the Winbind ADS Realm field. In the Winbind Domain Controllers field, enter either the name of a domain controller you want this Linux system to authenticate against, or an asterisk to indicate that Winbind should select a domain controller by querying DNS SRV records.


Fig. 15. The Configure Winbind dialog

Choose the appropriate default shell for Active Directory users; in this case, I chose bash (the Bourne-again shell). Do not use the "Join Domain" button at this stage; the computer will be joined to the domain later.

One more additional change needs to be made to /etc/pam.d/system-auth after it has been modified to support Winbind. When a Linux user logs in, the system requires the user to have a home directory. The home directory contains many user-specific settings and customization items much like the Windows registry. The problem here is that since users are created in Active Directory, Linux will not automatically create the user's home directory. Luckily, PAM can be configured to do this as part of the session setup.

Open the /etc/pam.d/system-auth file, scroll down, and insert the line "session optional pam_mkhomedir.so skel=/etc/skel umask=0077" before the last line in the session section (see Fig. 16). This line instructs PAM to create a home directory for the user if one does not already exist. It will use /etc/skel as the skeleton template and apply a umask of 0077, so the new directory is readable and writable only by its owner.


Fig. 16. Creating home directories for users

Installing and configuring Samba

To install the newly created Samba binaries, navigate to the /usr/src/redhat/RPMS directory; all RPM files produced by the rpmbuild command appear there. Note that Samba includes binaries that let a Linux client access a Windows (or Samba) file share, as well as code that lets a Linux system act as a Windows file server, Windows print server, or Windows NT 4.0-style domain controller.

To let Linux authenticate against Active Directory, not all of this is needed; the common Samba files and the Samba client binaries are sufficient. These are conveniently split into two RPM files: samba-client-3.0.31-0.i386.rpm and samba-common-3.0.31-0.i386.rpm. Install them with the rpm --install command, for example: rpm --install samba-common-3.0.31-0.i386.rpm. (Note that the -common RPM must be installed before the -client one.)

After installing the Samba client binaries, you need to modify the default Samba configuration so that Winbind handles Active Directory authentication properly. All Samba configuration (both client and server) lives in the plain-text file smb.conf, located by default in the /etc/samba directory. smb.conf can contain a huge number of configuration options, and a full account of its contents is beyond the scope of this article; see the samba.org website and the Linux help for more information.

The first step is to configure Winbind to use Active Directory for authentication. The security model in smb.conf needs to be set to "ads". The system-config-authentication utility should already have done this, but it never hurts to check: edit smb.conf, find the section labeled Domain Member Options, locate the line that starts with "security", and make sure it reads "security = ads". The next configuration step defines how Winbind maps Windows security principals, such as users and groups, to Linux identities, and this requires a bit more explanation.

The ID mapping problem

There is one big issue with authenticating Linux users against Active Directory that I haven't mentioned so far: unique identifiers for users and groups. Internally, neither Linux nor Windows refers to users by their names; each uses a unique internal identifier instead. Windows uses security identifiers (SIDs), variable-length structures that uniquely identify each user in a Windows domain. The SID also contains a unique domain identifier, so Windows can distinguish between users in different domains.

The Linux scheme is much simpler. Each user on a Linux computer has a UID, which is simply a 32-bit integer, but the scope of a UID is limited to the computer itself. There is no guarantee that the user with UID 436 on one Linux machine is the same person as the user with UID 436 on another. As a consequence, the user must be defined on every computer he wants to access, which is an undesirable situation.

Linux network administrators usually solve this problem by providing network authentication using either the Network Information System (NIS) or a shared LDAP directory. The network authentication system provides a UID for the user, and all Linux computers using this system will use the same user and group IDs. In this situation, I use Active Directory to provide unique user and group IDs.

To solve this problem, Winbind offers two strategies. The first (and most obvious) is to assign a UID to each user and group and store it with the corresponding object in Active Directory. When Winbind authenticates a user, it looks up the user's UID attribute and presents it to Linux as the user's internal ID. Winbind calls this scheme Active Directory ID mapping (idmap_ad). Fig. 17 shows the Active Directory ID mapping process.


Fig. 17. The Active Directory ID mapping process

The only downside to Active Directory ID mapping is that the administrator has to provide a mechanism for assigning an ID to every user and group and for keeping those IDs unique within the forest. More information can be found in the "Configuring Active Directory for Active Directory ID Mapping" sidebar.

Fortunately, there is another ID mapping strategy that involves much less administrative overhead. Recall that a Windows SID uniquely identifies both a user within a domain and the domain itself. The portion of a SID that uniquely identifies the user within the domain is called the relative identifier (RID), and it happens to be a 32-bit integer. So Winbind can simply extract the RID from the SID when the user logs in and use the RID as the unique internal UID. Winbind calls this strategy RID mapping (idmap_rid). Fig. 18 shows how RID mapping works.


Fig. 18. RID mapping

RID mapping has the advantage of zero administrative overhead, but it cannot be used in a multi-domain environment due to the possibility of users in multiple domains having the same RID value. But if you have a single Active Directory domain, RID mapping is the right choice.
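The RID extraction itself is trivial: the RID is the last sub-authority of the SID. A small Python sketch of what idmap_rid conceptually does (the SID value below is made up for illustration):

```python
def rid_from_sid(sid: str) -> int:
    """Return the relative identifier: the last dash-separated
    sub-authority of a textual Windows SID."""
    return int(sid.rsplit("-", 1)[1])

# A hypothetical user SID: the trailing 1104 is the RID,
# which idmap_rid would hand to Linux as the UID.
uid = rid_from_sid("S-1-5-21-3623811015-3361044348-30300820-1104")
print(uid)  # → 1104
```

In practice Winbind offsets the RID by a configurable base so that mapped IDs do not collide with local UIDs; this sketch omits that detail.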

To configure the ID mapping policy in Winbind, edit the /etc/samba/smb.conf file again and add the line "idmap backend = ad" to use the Active Directory mapping strategy, or "idmap backend = rid" to use the RID mapping strategy. Make sure no other lines in the file specify a mapping strategy.

There are a few other options to add to smb.conf for Winbind. Even though PAM is set up to create a home directory for each user at login, Winbind needs to be told what that directory's name is. We do this by adding the line "template homedir = /home/%U" to smb.conf (see Fig. 19). This tells Winbind that the home directory for each user authenticated via Active Directory will be /home/<username>. Don't forget to create the /home directory first.


Fig. 19. Specifying the home directory name
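Putting the pieces together, the Winbind-related portion of smb.conf for a single-domain setup using RID mapping would look roughly like this (the domain and realm names are placeholders, and option spellings follow Samba 3.0):

```
[global]
   workgroup = EXAMPLE
   realm = EXAMPLE.COM
   security = ads
   idmap backend = rid
   template homedir = /home/%U
   template shell = /bin/bash
```

template shell plays the same role for the login shell that template homedir plays for the home directory.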

Domain Join and Login

Now that the network, PAM, NSS, and Samba Winbind are set up correctly, it's time to join the Linux machine to the domain. This is done with Samba's net command. At the shell prompt, run "net ads join -U <admin name>", replacing <admin name> with an account that has sufficient authority to join computers to the domain.

The net command will prompt for the user's password. If all goes well, the computer will be joined to the domain. You can use the Active Directory Users and Computers snap-in to locate the computer account that was created.

You can test the bind status using the Winbind test tool, wbinfo. If you run wbinfo -t, the trust relationship between the computer and the domain will be tested. wbinfo -u will list all users in the domain and wbinfo -g will list all groups.

If the Linux machine has joined the domain successfully, the next step is to try logging in with an Active Directory user account and password. Log out of the Linux computer and log in with an Active Directory username. If everything works correctly, you should be able to log in.

Configuring Active Directory for the Active Directory ID Mapping Process

This information only applies to those using Active Directory identity mapping. Those who choose to use RID mapping can safely ignore this panel.

Before you can log in to a Red Hat server using an Active Directory account, you must make some changes to Active Directory itself. First, the Active Directory schema needs the attributes that Winbind uses to store user information. On Windows Server 2003 R2 the schema is ready to use; for earlier versions of the Active Directory schema, it has to be extended using the Microsoft Services for UNIX (SFU) package.

More information can be found at Services for UNIX on TechNet. SFU also includes an additional property page for the Active Directory Users and Computers Microsoft Management Console (MMC) snap-in to manage the per-user and per-group identity information required by Linux.

Once the schema is set up properly, you need to provide Linux IDs for all users (and the groups they are members of) that will log in to the Linux machine. This means defining values for the uidNumber and gidNumber attributes for those users and groups. There are some requirements for these attributes to keep in mind:

  1. Linux requires a UID for every user who authenticates himself. Because we need to manage user information in Active Directory, each user account that will log in to the Linux machine must have a unique uidNumber attribute. The specific value used for uidNumber is not significant, but it must be unique across all users who can log into the Linux machine.
  2. Every Linux user must also have a default group ID, so every Active Directory user that logs into a Linux computer requires a value for the gidNumber attribute as well. This value does not have to be unique among users, but it must uniquely identify the group.
  3. Each group in Active Directory must have a unique value for its gidNumber attribute. Strictly speaking, groups may lack a gidNumber value, but when authenticating a user, Winbind expects each group the user belongs to to have a unique gidNumber. It's probably much easier to just make sure every group has a unique gidNumber value.
  4. Winbind expects every user it finds in Active Directory to be a member of the Domain Users group, so it also expects the Domain Users group to have a value for its gidNumber attribute.
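The uniqueness requirements above are easy to get wrong by hand. As a sketch (the account data here is invented, and a real script would read the attributes from the directory), an audit of exported uidNumber/gidNumber values might look like:

```python
from collections import Counter

# Hypothetical exports from Active Directory: (name, uidNumber, gidNumber)
users = [("alice", 10001, 10000), ("bob", 10002, 10000), ("carol", 10002, 10000)]
groups = [("domain users", 10000), ("engineering", 10010)]

def duplicates(values):
    """Return the sorted values that occur more than once."""
    return sorted(v for v, n in Counter(values).items() if n > 1)

dup_uids = duplicates(u[1] for u in users)   # uidNumber must be unique per user
dup_gids = duplicates(g[1] for g in groups)  # gidNumber must be unique per group

print(dup_uids)  # → [10002]  (bob and carol collide)
print(dup_gids)  # → []
```

Any value reported by duplicates() violates requirement 1 (for users) or 3 (for groups) and must be reassigned before Winbind can map identities reliably.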

What if it doesn't work?

Setting up a Linux computer to authenticate with Active Directory and through Winbind is not a trivial project. There are a lot of things to tweak and a lot of things that can go wrong. The fact that each version of Linux or Samba is slightly different in its capabilities does not make this task any easier. But there are a number of sources containing information about what is happening.

The first is the Linux system log file, located in /var/log/messages. Samba will log significant events to this file, such as missing files or bad configuration. In addition to the system log file, Samba and Winbind have their own log files. They can be found in /var/log/samba and will provide the user with some additional information.

You can increase the verbosity (and length) of the log messages produced by Winbind by modifying its startup script to set the debug level. Edit the shell script /etc/init.d/winbind and add "-d 5" to the winbindd command line. This raises the debug level to 5 (valid values are 1 to 10), causing Winbind to generate much more detailed messages.

If Winbind manages to communicate with a domain controller, you can capture network packets using a utility like Netmon 3.1. This allows you to analyze exactly what Winbind is trying to do. You can also examine the Windows Security log on the domain controller, which will record authentication attempts.

Now that it's up and running, what do we have?

If everything works smoothly, it is now possible to log in to Linux systems using credentials supported by Active Directory. This is a huge improvement over managing identity locally on a Linux machine or using an insecure system like NIS. This allows you to centralize user management in one identity store: Active Directory.

But some things are missing that could make this solution really useful. First, getting technical support is a matter of luck: many Linux organizations don't know much about Active Directory, and the support you can get from the Linux community depends entirely on who reads your post and how they feel about it that day.

In addition, the Samba package does not provide migration or deployment tools. If you have existing Linux accounts with associated user IDs and permissions, you must manually ensure that they retain their UIDs when migrated to Active Directory.

Finally, one of the most important applications of Active Directory, Group Policy, is not available with Samba, although work is in progress. While a Linux system can be joined to Active Directory using Samba, it cannot be controlled using Group Policy.

Third Party Solutions

Authenticating Linux computers with Active Directory is obviously a good thing, but creating your own solution with Samba Winbind is tedious, if not just a nightmare. Readers might think that some resourceful software vendor should come up with an easier-to-use solution, and they would be right.

Four commercial software vendors have developed easy-to-install, easy-to-use versions of what I have demonstrated in this article. They provide code and migration tools for almost all popular versions of Linux, UNIX, and Apple Macintosh, as well as support for managing Linux computers using Group Policy.

The four companies are Centrify, Likewise Software, Quest Software, and Symark. All four vendors provide similar features, including Group Policy management across a wide range of Linux editions. Likewise Software has recently open-sourced its implementation, called Likewise Open, although its Group Policy component remains a commercial product. Likewise Open will be available for several major Linux releases. (Let me tell you a secret: while I was writing this article, my company, NetPro, was acquired by Quest Software.)

Does it make sense to build your own authentication system using Samba and Winbind when commercial options are available? If the budget does not include money for integration software, Samba, being open source, has the advantage of being free. You also get all the source code, which can be a tempting perk. But migrating existing Linux machines and their existing UIDs is a thorny issue.

On the other hand, if you want to save time on implementation and installation, or if you have existing Linux machines that need to be migrated, or if you would like an expert answer to your question, then looking at commercial solutions makes sense. If you need to manage Group Policy, there is no alternative to them.

But any solution that integrates Linux authentication with Active Directory reduces the effort required to manage multiple user accounts, improves system security, and provides a single identity store for management and auditing. And those are good enough reasons to try it.

Gil Kirkpatrick

Two-factor authentication (2FA) is an authentication method that requires several pieces of information to log into an account or device. In addition to the username/password combination, 2FA requires the user to enter additional information, such as a one-time password (OTP, such as a six-digit verification code).

In general, 2FA requires the user to present two different types of information:

  • Something the user knows (like a password)
  • Something that the user has (for example, a confirmation code generated by a special application - an authenticator).

2FA is a subset of multi-factor authentication (MFA). In addition to something the user knows and something the user has, MFA requires something the user is: biometric data such as a fingerprint or voice recognition.

2FA helps to secure the authentication process for a particular service or device: even if the password has been compromised, the attacker will also need a security code, and this requires access to the user's device that hosts the authenticator app. For this reason, many online services offer the option to enable 2FA for user accounts in order to increase the security of accounts at the authentication level.
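Under the hood, the six-digit codes produced by authenticator apps follow the HOTP/TOTP standards (RFC 4226 and RFC 6238): an HMAC-SHA1 over a moving counter, truncated to six digits. A minimal Python sketch of the code generation (the key below is the RFC 4226 test key, not a real secret):

```python
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    # RFC 6238: TOTP is HOTP applied to the number of 30-second intervals
    t = int(time.time()) if at is None else at
    return hotp(secret, t // step)

print(hotp(b"12345678901234567890", 0))  # → 755224
```

The 30-second step is why each code expires so quickly: the counter changes, so the HMAC (and hence the code) changes with it.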

In this tutorial, you will learn how to set up 2FA using the Google PAM module for a non-root user on Ubuntu 18.04. Since you are setting up 2FA for a non-root user, in the event of a lockout, you will still be able to access the computer from your root account. The instructions in the manual are general enough that they can be applied to both servers and desktop installations, both local and remote.

Requirements

  • An Ubuntu 18.04 server or desktop environment. A server should already have completed initial setup, including a non-root sudo user.
  • The authenticator installed on the mobile device (for example, Google Authenticator or Authy). With it, you will scan security QR codes.

Step 1: Installing the Google PAM Module

To set up 2FA on Ubuntu 18.04, you need to install the Google PAM module for Linux. Pluggable Authentication Module (PAM) is the authentication mechanism used by Linux. The Google PAM module will allow your user to perform 2FA authentication using Google generated OTP codes.

First, log in as the sudo user you created during the initial server setup:

ssh 8host@your_server_ip

Update the Ubuntu package index to get the latest version of the authenticator:

sudo apt-get update

After updating the repositories, install the latest version of the PAM module:

sudo apt-get install libpam-google-authenticator

This is a very small package without any dependencies, so it will take a few seconds to install. In the next section, we will set up 2FA for the sudo user.

Step 2: Setting up two-factor authentication

Now that you've installed the PAM module, run it to generate a QR code for the logged in user. This will generate the code, but the Ubuntu environment won't need 2FA until you enable it.

Run the google-authenticator command to start and configure the PAM module:

google-authenticator

The command will ask you several configuration questions. First it asks whether you want the tokens to be time-based. Time-based authentication tokens expire after a certain interval (30 seconds by default on most systems). Time-based tokens are more secure than non-time-based ones, and most 2FA implementations use them. You can choose either option here, but we recommend answering Yes and using time-based authentication tokens:

Do you want authentication tokens to be time-based (y/n) y

By answering y to this question, you will see several lines of output in the console:

  • QR Code: This is the code that needs to be scanned with the authenticator app. Once you have scanned it, the app will create a new OTP every 30 seconds.
  • Secret Key: This is an alternative way to set up an authentication application. If you are using an app that does not support QR scanning, you can enter a secret key to set up an authenticator.
  • Verification code: This is the first six-digit code that this particular QR code generates.
  • Emergency scratch codes: these are one-time tokens (also called backup codes) that let you pass 2FA authentication if you lose your authenticator device. Keep them in a safe place to avoid being locked out of the account.

Once you've set up your authenticator app and saved your backup codes in a safe place, the program will ask if you want to update the configuration file. If you select n, you will need to run the setup program again. Type y to save changes and continue:

Do you want me to update your "~/.google_authenticator" file (y/n) y

Next, the program will ask if you want to prevent the use of authentication codes more than once. By default, you can only use each code once, even if 30 seconds have not passed since it was created. This is the safest choice because it prevents replay attacks from an attacker who somehow managed to get your used verification code. For this reason, it is better to prohibit the use of codes more than once. Answer y to prevent multiple uses of the same token:

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

Then you need to specify whether auth tokens should be accepted for some time after their normal expiration. Codes are very time-sensitive, so they may fail if your devices' clocks are not synchronized. This option works around the issue by extending the validity window so that codes are accepted even if your devices are slightly out of sync. It is best to keep all your devices' clocks synchronized, since answering yes slightly reduces the security of the system. Answer n to keep the default expiration window:

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with the poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) n

The last question is whether you want to enable a limit on the number of login attempts. This will prevent the user from making more than three failed login attempts within 30 seconds, which will increase system security. Enable this restriction by answering y:

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

You have set up and generated 2FA codes using the PAM module. Now you need to enable 2FA in your environment.

Step 3: Activating 2FA in Ubuntu

The Google PAM module now generates 2FA codes for your user, but Ubuntu does not yet know that it needs to use the codes in the authentication process. At this point, you need to update your Ubuntu configuration to enable support for 2FA tokens in addition to basic authentication.

There are two ways here:

  1. You can require two-factor authentication every time a user logs in and every time a user requests sudo rights.
  2. You can only require 2FA during login, then only the user's password will be required when asking for sudo rights.

The first option would be ideal for a general environment where it is desirable to secure any action that requires sudo privileges. The second approach is more practical for a local desktop environment where you are the only user on the system.

Note A: If you enable 2FA on a remote machine that you access via SSH, you will need to complete steps two and three of the manual before proceeding. The rest of the steps in this tutorial apply to all Ubuntu installations, but remote environments require additional configuration to make the SSH service aware of 2FA.

If you are not using SSH to access your Ubuntu installation, you can skip ahead to the rest of the steps in this tutorial.

2FA prompt on login and sudo elevation

In order for the system to use 2FA during login and subsequent privilege escalation requests, you need to edit the /etc/pam.d/common-auth file by adding a line to the end of the existing file.

The common-auth file applies to all authentication mechanisms in the system, regardless of the environment used. It also applies to authentication requests that occur after a user has logged in, such as during a prompt for sudo rights when installing a new package from a terminal.

Open file:

sudo nano /etc/pam.d/common-auth

Add to the end of the file:

...
# and here are more per-package modules (the "Additional" block)
auth required pam_google_authenticator.so nullok


This line enables Ubuntu's authentication system to support 2FA when logging in with the Google PAM module. The nullok option allows existing users to log in even if they have not set up 2FA authentication for their account. In other words, users who have set up 2FA will be required to enter an authentication code the next time they log in, while users who have not run the google-authenticator command will be able to log in with their standard credentials until they set up 2FA.

Save and close the file.

2FA prompt only when logged in

If you want 2FA to be requested only when logging into a desktop environment, you need to edit the configuration file of the desktop manager you are using. The name of the configuration file is usually the same as the name of the desktop environment. For example, the configuration file for gdm (the default Ubuntu environment since Ubuntu 16.04) is /etc/pam.d/gdm.

In the case of a headless server (such as a virtual server), you need to edit the /etc/pam.d/common-session file instead. Open the appropriate file for your environment:

sudo nano /etc/pam.d/common-session

Add the line "auth required pam_google_authenticator.so nullok" to the end of the file:

#
# /etc/pam.d/common-session - session-related modules common to all services
#
...
# # and here are more per-package modules (the "Additional" block)
session required pam_unix.so
session optional pam_systemd.so
# end of pam-auth-update config
auth required pam_google_authenticator.so nullok

Ubuntu will now require 2FA when a user connects to the system via the command line (locally or remotely via SSH), but this will not apply to running commands with sudo.

You have configured Ubuntu to support 2FA. Now it's time to test the configuration and make sure that when you log into your Ubuntu system, you will be prompted for a security code.

Step 4: Testing Two-Factor Authentication

Previously, you set up 2FA to generate codes every 30 seconds. Now try to login to your Ubuntu environment.

First, log out and log back in to your Ubuntu environment:

ssh 8host@your_server_ip

If you are using password-based authentication, you will be prompted for the user's password:

Note: If you are using SSH key authentication on the remote server, you will not be prompted for a password; the key is presented and accepted automatically, and you only need to enter the verification code.

Enter the password, after which you will be prompted to enter the 2FA code:

Verification code:

After that you will be logged in:

8host@your_server_ip:~$

If 2FA was enabled for login only, you will no longer need to enter 2FA verification codes until your session ends or you manually log out.

If you have enabled 2FA via the common-auth file, you will be prompted to specify it as well on every request for sudo privileges:

8host@your_server_ip:~$ sudo -s
sudo password for 8host:
Verification code:
root@your_server_ip:~#

You have verified that the 2FA configuration is working properly. If something went wrong and the system didn't prompt you for verification codes, go back to the third section of the guide and make sure you've edited the correct Ubuntu authentication file.

Step 5: Preventing 2FA Lockout

If your mobile device is lost or destroyed, it is important to have backup methods for regaining access to your 2FA-enabled account. When you set up 2FA for the first time, you have several options to avoid being locked out:

  • Keep a backup copy of your secret configuration codes in a safe place. You can do it manually, but some authentication apps like Authy provide code backup features.
  • Save the recovery codes in a safe place outside of a 2FA-enabled environment that can be accessed if needed.

If for some reason you have no access to your backup options, you can try to restore access to a local environment or a 2FA-enabled remote server in another way.

Step 6: Restoring Access to the Local Environment (Optional)

If you have physical access to the machine, you can boot into recovery mode to disable 2FA. Recovery mode is a systemd target (similar to a runlevel) used for performing administrative tasks. To enter it, you will need to edit some settings in GRUB.

To access GRUB, first restart your computer:

sudo reboot

When the GRUB menu appears, make sure the Ubuntu entry is highlighted. This is the default 18.04 install name, but it might be different if you manually changed it after installation.

Then press the e key on your keyboard to edit the GRUB configuration before booting your system.

Go to the end of the file that appears and find the line that starts with linux and ends with $vt_handoff. Go to the end of this line and add systemd.unit=rescue.target. Make sure you leave a space between $vt_handoff and systemd.unit=rescue.target. This will allow the Ubuntu machine to boot into recovery mode.
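For example, the edited kernel line would end like this (everything before $vt_handoff is machine-specific and abbreviated here with placeholders):

```
linux /boot/vmlinuz-... root=UUID=... ro quiet splash $vt_handoff systemd.unit=rescue.target
```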

After making the change, press Ctrl + X to boot with the edited configuration. The machine will boot to a command line; press Enter to enter recovery mode.

Once at the command line, open the Google Authenticator configuration file located in the home directory of the blocked user.

nano /home/8host/.google_authenticator

The first line in this file is the user's private key, which is used to set up the authenticator.

Now you have two options:

  1. You can copy the private key and set up the authenticator.
  2. If you want to start with a clean slate, you can remove the ~/.google-authenticator file entirely to disable 2FA for that user. After logging in again, you will be able to set up 2FA again and get a new secret key.

In any case, you can restore the system after a 2FA blocking in a local environment using the GRUB bootloader. Next, we will explain how to restore access to a blocked remote environment.

Step 7: Restoring Access to a Remote Environment (Optional)

If your sudoer account is locked in a remote environment, you can temporarily disable or reconfigure 2FA using the root user.

Log in as the root user:

ssh root@your_server_ip

After logging in, open the Google Authenticator settings file located in the home directory of the blocked user:

sudo nano /home/8host/.google_authenticator

The first line in this file is the user's private key, which you need to set up the authenticator.

Now you have two paths:

  1. If you want to set up a new or erased device, you can use the secret key to reconfigure the authenticator app.
  2. If you want to start with a clean slate, you can delete the /home/8host/.google_authenticator file completely to disable 2FA for that user. After logging in as a sudo user, you will be able to set up 2FA again and get a new private key.

With any of these options, you will be able to recover from an accidental 2FA block using the root account.

Conclusion

In this tutorial, you set up 2FA on an Ubuntu 18.04 machine. Two-factor authentication adds a layer of account and system security: in addition to the standard credentials, you must also enter a verification code to sign in. This makes unauthorized access to your account much harder, even if an attacker manages to obtain your credentials.


If you're a Linux administrator and want to keep your servers and desktops as secure as possible, you've probably thought about using two-factor authentication. In general, it is highly recommended for everyone to set it up, since two-factor authentication makes it much more difficult for attackers to gain access to your machines.

Linux allows you to set up a computer so that you cannot log in to the console, the desktop, or via Secure Shell without a two-factor authentication code tied to that machine. Let's walk through the entire setup process on Ubuntu Server 16.04.

Introduction

One thing to keep in mind before you start is that once you set up two-factor authentication, you won't be able to access your computer without a code generated by a third-party app. Each time you want to log in, you will need either your smartphone or the emergency codes, which you can save along the way.

We will need a Linux server or desktop. Make sure the system is up to date and your data is backed up in case of unforeseen circumstances. To generate the two-factor codes, we will use a third-party application such as Authy or Google Authenticator. In this guide we will use Google Authenticator, which must first be installed.

Installation

Log in to the system and follow these steps:

  1. Open a terminal window.
  2. Run the command: sudo apt install libpam-google-authenticator.
  3. Type in the sudo password and press Enter.
  4. If you are prompted for confirmation, type "y" and press Enter.
  5. Wait for the end of the installation.

Now it's time to set up your computer for two-factor authentication.

Configuration

Return to the terminal window and enter the command: sudo nano /etc/pam.d/common-auth. Add the following line to the end of the file (the same module line the SSH section uses):

auth required pam_google_authenticator.so nullok

Save and close this file.
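A note on the `nullok` option, which this guide adds to pam_google_authenticator.so: it lets users who have not yet created a ~/.google_authenticator file keep logging in with just their password, which is convenient while you enrol users one by one. Once every user has run google-authenticator, you can enforce 2FA strictly by using the line without it:

```
auth required pam_google_authenticator.so
```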

Now we need to set up Google Authenticator for each user who needs access to the system. To do this, return to the terminal window and, as the user to whom you plan to grant access, run the google-authenticator command. You will have to answer a few questions.

The first question is "Do you want authentication tokens to be time-based (y/n)". Answer "y", and you will be shown a QR code. Open the two-factor app on your smartphone, add an account, and scan this QR code.

Figure 1. Received QR code

After you add the code, there are a few more questions left to answer:

  • Do you want me to update your "/home/jlwallen/.google_authenticator" file (y/n)? - confirms writing the configuration to the user's home directory;
  • Do you want to disallow multiple uses of the same authentication token (y/n)? - restricts logins to one every 30 seconds, which improves your chances of noticing or even preventing a man-in-the-middle attack;
  • By default a code is valid for 30 seconds, and since the server and client clocks may drift slightly, the module can accept a few adjacent tokens. If you run into synchronization issues, you can increase the window to about 4 minutes. Do you want to do so (y/n)?
  • If you are concerned about brute-force attacks, you can enable rate limiting for the authentication module: by default, no more than 3 login attempts every 30 seconds. Do you want to enable rate-limiting (y/n)?

Answer yes to each question by typing "y" and pressing Enter.
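For scripted setups, the same answers can be given non-interactively with command-line flags. A sketch, with the flag meanings noted (flags as listed by google-authenticator's --help; verify on your version):

```shell
#!/bin/sh
# Non-interactive equivalent of answering every prompt above with "y":
#   -t         time-based tokens
#   -d         disallow reuse of the same token
#   -f         write the ~/.google_authenticator file without asking
#   -r 3 -R 30 rate limit: at most 3 login attempts per 30 seconds
#   -w 3       default code window (use -w 17 for the ~4-minute skew window)
if command -v google-authenticator >/dev/null 2>&1; then
    google-authenticator -t -d -f -r 3 -R 30 -w 3
else
    echo "google-authenticator is not installed"
fi
```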

SSH setup

The next step is to set up SSH to work with two-factor authentication. If you skip this step, you will not be able to log in via SSH.

First you need to enable the PAM module. To do this, we type the command: sudo nano /etc/pam.d/sshd. With the file open, add the following line to the end of the file:

auth required pam_google_authenticator.so nullok

Save this file and then run the command: sudo nano /etc/ssh/sshd_config. In this file we find:

ChallengeResponseAuthentication no

And change to:

ChallengeResponseAuthentication yes

Save this file and restart sshd - sudo systemctl restart sshd.
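The sshd edit above can also be scripted. A sketch, assuming GNU sed as shipped on Ubuntu; the path is overridable so you can rehearse the change on a test copy before touching the real file as root:

```shell
#!/bin/sh
# Path is overridable so the edit can be rehearsed on a test copy first.
SSHD_CONFIG=${SSHD_CONFIG:-/etc/ssh/sshd_config}

if [ -w "$SSHD_CONFIG" ]; then
    cp "$SSHD_CONFIG" "$SSHD_CONFIG.bak"    # keep a backup before editing
    sed -i 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' "$SSHD_CONFIG"
    grep '^ChallengeResponseAuthentication' "$SSHD_CONFIG"   # verify the change
    # systemctl restart sshd                # uncomment on a real host
else
    echo "$SSHD_CONFIG is not writable; run as root or set SSHD_CONFIG to a copy"
fi
```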

Login

Before you log out, we strongly recommend that you open a new terminal window and try to log in via SSH. If this fails, repeat all the steps above, making sure you haven't missed anything. Once you have successfully logged in via SSH, you can log out of the session and log in again.
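When testing from the second terminal, you can explicitly request the authentication method that goes through PAM, so the verification-code prompt is definitely exercised (the user and host below are placeholders):

```shell
# Force keyboard-interactive authentication, which is what triggers the
# pam_google_authenticator prompt after the password:
ssh -o PreferredAuthentications=keyboard-interactive user@your_server_ip
```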
