
ISBN: 9781495148200 (Paperback Edition)
ASIN: B00WFEIS0S (eBook Edition)

This book serves four major learning objectives: 1) a self-study guide for those who intend to take and pass the Red Hat exams RHCSA (EX200) and RHCE (EX300), 2) an in-class training guide for college students, 3) an on-the-job reference for administrators, programmers, and DBAs, and 4) an easy-to-understand guide for novice and non-RHEL administrators who plan to learn RHEL from scratch.

This book is divided into two sections, RHCSA and RHCE, based on exam and learning objectives. The RHCSA section covers tasks that are intended for a single system, while the RHCE section covers those that require two or more networked systems. The book has twenty-five chapters altogether that are organized logically, keeping in mind the four learning objectives mentioned above.

The RHCSA section (chapters 1 to 13) covers the topics that help the reader learn system administration tasks and prepare for the new RHCSA exam. Material presented includes local RHEL7 installation; general Linux concepts and basic commands; compression and archiving; text editor and online help; file and directory manipulation and security; processes, task scheduling, and bash shell features; package administration and yum repositories; host virtualization, and network and automated installations; system boot, kernel management, systemd, and local logging; user and group administration; storage partitioning and file system build; AutoFS, swap, and ACLs; basic firewall and SELinux; network interface configuration and NTP/LDAP clients; and SSH and TCP Wrappers.

The RHCE section (chapters 14 to 25) covers the topics that help the reader learn network administration tasks and prepare for the new RHCE exam. Material presented includes automation with shell scripting; network interface bonding and teaming; IPv6 and routing setups; remote time synchronization, firewalld, and Kerberos authentication; kernel tuning, resource utilization reporting, and network logging; block storage sharing with iSCSI; file storage sharing with NFS and Samba; web servers and virtual hosting; mail transfer and DNS; and MariaDB configuration and queries.

Each chapter highlights the major topics and relevant exam objectives covered in that chapter, and ends with a summary followed by review questions/answers and do-it-yourself challenge labs. Throughout the book, figures, tables, and screenshots are furnished to support the explanation. The book also includes two sample exams for RHCSA and two for RHCE, which are expected to be completed using the knowledge and skills gained from reading the material and practicing the exercises and labs.

01. The exam objective "Configure key-based authentication" appears in both the RHCSA and RHCE lists. Chapter 13 "Securing Access with SSH and TCP Wrappers" in the RHCSA section of the book addresses this and other SSH-related objectives for both exams.

02. Red Hat has removed the RHCE exam objective under SMB "Use Kerberos to authenticate access to shared directories" from the official list.

The following instructions are presented to set up OpenLDAP and Kerberos servers and test them.

OpenLDAP Server and Client Configuration Procedures:
1. OpenLDAP Server Configuration Using a Self-Signed Certificate (to be done on server2).
2. OpenLDAP Client Configuration and Testing (to be done on server1).
3. OpenLDAP Client Testing with AutoFS (to be done on server1).

Kerberos Server and Client Configuration Procedures:
1. Kerberos Server Configuration (to be done on server2).
2. Kerberos Client Configuration (to be done on server1).

1. OpenLDAP Server Configuration:
=======================================================
This exercise should be done on server2.

1. Install the required packages:

# yum -y install openldap openldap-servers openldap-clients migrationtools

2. Generate an RSA encryption key called server2key.pem:

# cd /etc/openldap/certs ; openssl genrsa -out server2key.pem
Generating RSA private key, 1024 bit long modulus
...........++++++
.++++++
e is 65537 (0x10001)

3. Generate a CSR using the encryption key:

# openssl req -new -key server2key.pem -out server2.csr
. . . . . . . .
Country Name (2 letter code) [XX]:CA
State or Province Name (full name) []:ON
Locality Name (eg, city) [Default City]:Toronto
Organization Name (eg, company) [Default Company Ltd]:Home
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:server2.example.com
Email Address []:

Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:

4. Generate a self-signed certificate using the encryption key and CSR:

# openssl x509 -req -signkey server2key.pem -in server2.csr -out server2crt.pem
Signature ok
subject=/C=CA/ST=ON/L=Toronto/O=Home/CN=server2.example.com
Getting Private key

5. Secure the key and set appropriate ownership:

# chmod 0600 server2key.pem ; chown ldap:ldap server2key.pem

6. Generate a password hash for user user1. This user will be used to perform LDAP administration and query tasks. Create user1 if it does not already exist.

# su - user1
$ slappasswd
New password:
Re-enter new password:
{SSHA}e9RL5xcXjrAPiAIuWWrO1iobo86D81l2

7. Change to the /etc/openldap/slapd.d directory and open the cn=config.ldif file for edit:

# cd /etc/openldap/slapd.d ; vi cn=config.ldif

8. Set the following three directives in the file as follows:

olcTLSCACertificatePath: /etc/openldap/certs
olcTLSCertificateFile: /etc/openldap/certs/server2crt.pem
olcTLSCertificateKeyFile: /etc/openldap/certs/server2key.pem
9. Change to the cn=config directory and open the olcDatabase={2}hdb.ldif file for edit:

# cd cn=config ; vi olcDatabase={2}hdb.ldif

10. Modify the entries in the file as follows (copy user1's password hash and paste it to the olcRootPW directive):

olcSuffix: dc=example,dc=com
olcRootDN: cn=user1,dc=example,dc=com
olcRootPW: {SSHA}SIj3y5MOUVpXdQjtoZiszJS/Z5uhaZ2f

11. Open the olcDatabase={1}monitor.ldif file for edit:

# vi olcDatabase={1}monitor.ldif

12. Modify the olcAccess entry in the file as follows:

olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=user1,dc=example,dc=com" read by * none

13. Change to the OpenLDAP database directory, copy the DB_CONFIG.example file from the /usr/share/openldap-servers directory over as DB_CONFIG, and set the owner and owning group to ldap:

# cd /var/lib/ldap && cp /usr/share/openldap-servers/DB_CONFIG.example DB_CONFIG
# chown ldap:ldap DB*

14. Add the ldap service to the firewall configuration and reload the rules:

# firewall-cmd --permanent --add-service=ldap ; firewall-cmd --reload

15. Enable the LDAP server process slapd to start at subsequent system reboots:

# systemctl enable slapd
ln -s '/usr/lib/systemd/system/slapd.service' '/etc/systemd/system/multi-user.target.wants/slapd.service'

16. Start the slapd service:

# systemctl start slapd

17. Add a group called dba with GID 2015:

# groupadd -g 2015 dba

18. Add a user called ldapuser1 with primary group dba and password ldapuser123:

# useradd -g dba ldapuser1
# echo ldapuser123 | passwd --stdin ldapuser1
Changing password for user ldapuser1.
passwd: all authentication tokens updated successfully.

19. Change to the /etc/openldap directory, grep for ldapuser1 and dba information from the /etc/passwd, /etc/shadow, and /etc/group files, and redirect the output to appropriate files:

# cd /etc/openldap
# grep ldapuser /etc/passwd > users
# grep ldapuser /etc/shadow > shadow
# grep dba /etc/group > groups

20. Change to the /usr/share/migrationtools directory. Make a backup of the migrate_common.ph file and open it for edit. Comment out lines 43, 44, 46, 47, 49 to 54, 56, 57, 59, 60, and 62 to 67. Leave the entries for users and groups uncommented (lines 45, 48, 58, and 61). Modify lines 71 and 74 as indicated.
# cd /usr/share/migrationtools
# cp migrate_common.ph migrate_common.ph.org
# vi migrate_common.ph

#$NAMINGCONTEXT{'aliases'} = "cn=aliases";  # Line 43
#$NAMINGCONTEXT{'fstab'} = "cn=mounts";  # Line 44
$NAMINGCONTEXT{'passwd'} = "cn=users";
#$NAMINGCONTEXT{'netgroup_byuser'} = "cn=netgroup.byuser";  # Line 46
#$NAMINGCONTEXT{'netgroup_byhost'} = "cn=netgroup.byhost";  # Line 47
$NAMINGCONTEXT{'group'} = "cn=groups";
#$NAMINGCONTEXT{'netgroup'} = "cn=netgroup";  # Line 49
#$NAMINGCONTEXT{'hosts'} = "cn=machines";  # Line 50
#$NAMINGCONTEXT{'networks'} = "cn=networks";  # Line 51
#$NAMINGCONTEXT{'protocols'} = "cn=protocols";  # Line 52
#$NAMINGCONTEXT{'rpc'} = "cn=rpcs";  # Line 53
#$NAMINGCONTEXT{'services'} = "cn=services";  # Line 54
} else {
#$NAMINGCONTEXT{'aliases'} = "ou=Aliases";  # Line 56
#$NAMINGCONTEXT{'fstab'} = "ou=Mounts";  # Line 57
$NAMINGCONTEXT{'passwd'} = "ou=People";
#$NAMINGCONTEXT{'netgroup_byuser'} = "nisMapName=netgroup.byuser";  # Line 59
#$NAMINGCONTEXT{'netgroup_byhost'} = "nisMapName=netgroup.byhost";  # Line 60
$NAMINGCONTEXT{'group'} = "ou=Group";
#$NAMINGCONTEXT{'netgroup'} = "ou=Netgroup";  # Line 62
#$NAMINGCONTEXT{'hosts'} = "ou=Hosts";  # Line 63
#$NAMINGCONTEXT{'networks'} = "ou=Networks";  # Line 64
#$NAMINGCONTEXT{'protocols'} = "ou=Protocols";  # Line 65
#$NAMINGCONTEXT{'rpc'} = "ou=Rpc";  # Line 66
#$NAMINGCONTEXT{'services'} = "ou=Services";  # Line 67

$DEFAULT_MAIL_DOMAIN = "example.com";  # Line 71
$DEFAULT_BASE = "dc=example,dc=com";  # Line 74

21. Execute the migrate_base.pl script, which reads the modified migrate_common.ph file, to generate the foundation configuration and store the output in the /etc/openldap/base.ldif file:

# ./migrate_base.pl > /etc/openldap/base.ldif

22. Show the contents of the base.ldif file:

# cat /etc/openldap/base.ldif
dn: dc=example,dc=com
dc: example
objectClass: top
objectClass: domain

dn: ou=People,dc=example,dc=com
ou: People
objectClass: top
objectClass: organizationalUnit

dn: ou=Group,dc=example,dc=com
ou: Group
objectClass: top
objectClass: organizationalUnit

23. Open the migrate_passwd.pl file and replace "/etc/shadow" with "/etc/openldap/shadow" (line 188) to direct this script to use the shadow output stored earlier:

# vi migrate_passwd.pl
Replace /etc/shadow with /etc/openldap/shadow (line 188)

24. Change to the /etc/openldap directory and generate user and group data in LDIF format to pass to the OpenLDAP server in a later step:

# cd /etc/openldap
# /usr/share/migrationtools/migrate_passwd.pl /etc/openldap/users > users.ldif
# /usr/share/migrationtools/migrate_group.pl /etc/openldap/groups > groups.ldif

25. Add the cosine.ldif schema from the /etc/openldap/schema directory to the OpenLDAP database to support required LDAP objects before user information can be added to the database:

# ldapadd -f /etc/openldap/schema/cosine.ldif -H ldapi:/// -Y EXTERNAL
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=cosine,cn=schema,cn=config"

26. Add the base information to the OpenLDAP database. Enter the password for user1 when prompted.
# ldapadd -W -D cn=user1,dc=example,dc=com -f base.ldif
Enter LDAP Password:
adding new entry "dc=example,dc=com"
adding new entry "ou=People,dc=example,dc=com"
adding new entry "ou=Group,dc=example,dc=com"

27. Add the nis.ldif schema from the /etc/openldap/schema directory to the OpenLDAP database to support additional required LDAP objects:

# ldapadd -f schema/nis.ldif -H ldapi:/// -Y EXTERNAL
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=nis,cn=schema,cn=config"

28. Add the user and group information to the OpenLDAP database. Enter the password for user1 when prompted.

# ldapadd -W -D cn=user1,dc=example,dc=com -f users.ldif
adding new entry "uid=ldapuser1,ou=People,dc=example,dc=com"

# ldapadd -W -D cn=user1,dc=example,dc=com -f groups.ldif
adding new entry "cn=dba,ou=Group,dc=example,dc=com"

29. Verify the addition of all base and user entries in the OpenLDAP directory:

# ldapsearch -x -b dc=example,dc=com
. . . . . . . .
# example.com
dn: dc=example,dc=com
dc: example
objectClass: top
objectClass: domain

# People, example.com
dn: ou=People,dc=example,dc=com
ou: People
objectClass: top
objectClass: organizationalUnit

# Group, example.com
dn: ou=Group,dc=example,dc=com
ou: Group
objectClass: top
objectClass: organizationalUnit

# ldapuser1, People, example.com
dn: uid=ldapuser1,ou=People,dc=example,dc=com
uid: ldapuser1
cn: ldapuser1
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword:: e2NyeXB0fSQ2JHNnc2Q1RHVxJDlGNXV4cTROUG8vNkgvcTl6V2F2NjRqbHNGR2pqV2p4NTNyeTlia0lDbVFFc2xNTmJlNy9CQnNDQTl6dVJBT1RLVzRublpkd1ZyU0ZqN0QySUFTajAv
shadowLastChange: 16567
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1001
gidNumber: 2015
homeDirectory: /home/ldapuser1

# dba, Group, example.com
dn: cn=dba,ou=Group,dc=example,dc=com
objectClass: posixGroup
objectClass: top
cn: dba
userPassword:: e2NyeXB0fXg=
gidNumber: 2015
. . . . . . . .

30. Verify the addition of only the user account:

# ldapsearch -x -b dc=example,dc=com cn=ldapuser1
. . . . . . . .
# ldapuser1, People, example.com
dn: uid=ldapuser1,ou=People,dc=example,dc=com
uid: ldapuser1
cn: ldapuser1
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword:: e2NyeXB0fSQ2JHNnc2Q1RHVxJDlGNXV4cTROUG8vNkgvcTl6V2F2NjRqbHNGR2pqV2p4NTNyeTlia0lDbVFFc2xNTmJlNy9CQnNDQTl6dVJBT1RLVzRublpkd1ZyU0ZqN0QySUFTajAv
shadowLastChange: 16567
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1001
gidNumber: 2015
homeDirectory: /home/ldapuser1
. . . . . . . .

31. Verify the addition of only the group entry:

# ldapsearch -x -b dc=example,dc=com cn=dba
. . . . . . . .
# dba, Group, example.com
dn: cn=dba,ou=Group,dc=example,dc=com
objectClass: posixGroup
objectClass: top
cn: dba
userPassword:: e2NyeXB0fXg=
gidNumber: 2015
. . . . . . . .

This completes the setup and local testing of the OpenLDAP directory server.
==========================================================
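As an optional sanity check that is not part of the original procedure, you can confirm on server2 that slapd is listening on port 389 and that a StartTLS-protected query works with the self-signed certificate before moving on to the client:

# ss -lnt | grep 389
# ldapsearch -x -ZZ -H ldap://server2.example.com -b dc=example,dc=com -s base

The -ZZ option makes ldapsearch fail unless StartTLS succeeds. Because the certificate is self-signed, this check assumes the TLS_CACERT directive in /etc/openldap/ldap.conf points to /etc/openldap/certs/server2crt.pem (or TLS_REQCERT is set to allow); adjust the paths if yours differ.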
2. OpenLDAP Client Configuration and Testing:
==================================================================
This exercise should be done on server1.

1. Install the required OpenLDAP client software packages:

# yum -y install openldap openldap-clients nss-pam-ldapd

2. Change into the /etc/openldap/cacerts directory and copy server2:/etc/openldap/certs/server2crt.pem over:

# cd /etc/openldap/cacerts && scp server2:/etc/openldap/certs/server2crt.pem .

3. Protect the certificate with permissions 0600 and set the owner and owning group to ldap:ldap:

# chmod 0600 server2crt.pem ; chown ldap:ldap server2crt.pem

4. Configure the client using the authconfig command:

# authconfig --enableldap --enableldapauth --ldapserver=ldap://server2.example.com --enableldaptls --ldaploadcacert=file:///etc/openldap/cacerts/server2crt.pem --ldapbasedn="dc=example,dc=com" --update

5. Use the getent command to retrieve user and group information from OpenLDAP, and query the directory with the ldapsearch command:

# getent passwd ldapuser1
ldapuser1:x:1001:2015:ldapuser1:/home/ldapuser1:/bin/bash
# getent group dba
dba:*:2015:

# ldapsearch -W -D cn=user1,dc=example,dc=com cn=ldapuser1
dn: uid=ldapuser1,ou=People,dc=example,dc=com
uid: ldapuser1
cn: ldapuser1
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword:: e2NyeXB0fSQ2JEwuTVQubkdRJDYyZ3RHUVgvcHI2dEt6Z0I3MXNiZUlhejhLRXZJSjIuRkxabUIzVDM1QzhKOS5qdlZIUDREcTJ3T04uVWd1VXNWZnhpVzZCTm56MU1nUnBYd08xVTYv
shadowLastChange: 16568
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1001
gidNumber: 2015
homeDirectory: /home/ldapuser1

# ldapsearch -W -D cn=user1,dc=example,dc=com cn=dba

6. Log in as ldapuser1 and verify the account information using the id command:

# su - ldapuser1
$ id

7. Exit out of the login session by pressing Ctrl+d at the $ prompt.

This completes the remote testing of the OpenLDAP directory server.
==================================================

3. OpenLDAP Client Testing with AutoFS:
=====================================================================
Run the following steps on the OpenLDAP server (server2).

1. Install the NFS server utilities:

# yum -y install nfs-utils

2. Edit the /etc/exports file and add the following entry to it:

# vi /etc/exports
/home server1.example.com(rw)

3. Activate the NFS service to autostart at subsequent reboots:

# systemctl enable nfs-server

4. Start the NFS server:

# systemctl start nfs-server

5. Allow NFS traffic to pass through the firewall:

# firewall-cmd --permanent --add-service nfs ; firewall-cmd --reload

Now run the following steps on server1:

6. Install the required AutoFS software package:

# yum -y install autofs

7. Edit the /etc/auto.master file and add the following entry to it:

# vi /etc/auto.master
/home /etc/auto.home

8. Create a file called auto.home in the /etc directory and add the following line to it:

# vi /etc/auto.home
* -rw server2:/home/&

9. Enable the autofs service to autostart at subsequent system reboots:

# systemctl enable autofs

10. Start the autofs service and check its operational status:

# systemctl start autofs && systemctl status autofs

11. Try logging in as ldapuser1:

# su - ldapuser1
Password:
$ id
uid=1001(ldapuser1) gid=2015(dba) groups=2015(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ df -h .
Filesystem               Size  Used Avail Use% Mounted on
server2:/home/ldapuser1  6.7G  3.0G  3.8G  44% /home/ldapuser1
$ pwd
/home/ldapuser1

12. Exit out of the login session by pressing Ctrl+d at the $ prompt.
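If the automounted home directory does not show up in step 11, two quick checks on server1 (not part of the original exercise) usually narrow down whether the problem is on the LDAP side or the autofs side:

# getent passwd ldapuser1          (confirms the account is resolvable through LDAP)
# automount -m                     (dumps the maps autofs has read, including /home from /etc/auto.home)

If both look correct, recheck the NFS export and the firewall settings on server2.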
==================================================

1. Configure a Kerberos Server

This exercise should be done on server2. This procedure configures a Kerberos server for realm EXAMPLE.COM.

1. Ensure server2 has valid entries for itself and server1 in its /etc/hosts file.

2. Ensure that NTP is operational on server2 and server1.

3. Install the Kerberos server packages:

# yum -y install krb5-server krb5-libs

4. Ensure the /etc/krb5.conf file contains the following entries for realm EXAMPLE.COM. The first directive sets the default Kerberos realm. The next set of directives defines the hostnames for the KDC and admin servers, and the last set of directives sets the mappings between DNS domains and Kerberos realms. Leave the other directives at their default values.

[libdefaults]
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = server2.example.com
  admin_server = server2.example.com
 }

[domain_realm]
 example.com = EXAMPLE.COM
 .example.com = EXAMPLE.COM

5. Create the KDC database for realm EXAMPLE.COM. Specify kdc123 as the database master key and store (-s) it in the .k5.EXAMPLE.COM stash file in the /var/kerberos/krb5kdc directory.

# kdb5_util create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: kdc123
Re-enter KDC database master key to verify: kdc123

6. Set the password for the existing kadmin principal to kadmin123 using the cpw subcommand in the kadmin.local shell:

# kadmin.local -p kadmin/admin
Authenticating as principal kadmin/admin@EXAMPLE.COM with password.
kadmin.local: cpw kadmin/admin
Enter password for principal "kadmin/admin@EXAMPLE.COM":
Re-enter password for principal "kadmin/admin@EXAMPLE.COM":
Password for "kadmin/admin@EXAMPLE.COM" changed.

7. While in the kadmin.local shell, add user user1 as a principal to the KDC and assign the password user1kdc (create user1 if it does not exist):

kadmin.local: addprinc user1
WARNING: no policy specified for user1@EXAMPLE.COM; defaulting to no policy
Enter password for principal "user1@EXAMPLE.COM": user1kdc
Re-enter password for principal "user1@EXAMPLE.COM": user1kdc
Principal "user1@EXAMPLE.COM" created.

8. While in the kadmin.local shell, list all available principals:

kadmin.local: list_principals
K/M@EXAMPLE.COM
kadmin/admin@EXAMPLE.COM
kadmin/changepw@EXAMPLE.COM
kadmin/server2.example.com@EXAMPLE.COM
krbtgt/EXAMPLE.COM@EXAMPLE.COM
user1@EXAMPLE.COM

9. While in the kadmin.local shell, add the Kerberos server as a principal:

kadmin.local: addprinc -randkey host/server2.example.com
WARNING: no policy specified for host/server2.example.com@EXAMPLE.COM; defaulting to no policy
Principal "host/server2.example.com@EXAMPLE.COM" created.

10. While in the kadmin.local shell, add the principal's keys to the /etc/krb5.keytab file (this is the default name and location of the file):

kadmin.local: ktadd host/server2.example.com

11. Quit the kadmin.local shell:

kadmin.local: quit

12. Allow Kerberos traffic to pass through the firewall on ports 88 and 749, and reload the rules:

# firewall-cmd --permanent --add-port 88/tcp --add-port 749/tcp ; firewall-cmd --reload

13. Set the Kerberos server processes to autostart at system reboots:

# systemctl enable krb5kdc kadmin

14. Start the Kerberos server processes:

# systemctl start krb5kdc kadmin

This completes the procedure to configure a Kerberos server.
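Before configuring the client, a brief verification on server2 (not an original step) can confirm that the host key was stored in the keytab and that both server processes are running:

# klist -k /etc/krb5.keytab        (should list host/server2.example.com@EXAMPLE.COM entries)
# systemctl is-active krb5kdc kadmin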
2. Configure a Client to Authenticate Using Kerberos

This exercise should be done on server1.

1. Install the required Kerberos client packages:

# yum -y install krb5-workstation krb5-libs pam_krb5

2. Ensure that the /etc/krb5.conf file has the following directives set:

[libdefaults]
 dns_lookup_realm = false
 dns_lookup_kdc = false
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = server2.example.com
  admin_server = server2.example.com
 }

[domain_realm]
 example.com = EXAMPLE.COM
 .example.com = EXAMPLE.COM

3. Log in to the Kerberos service as the kadmin principal:

# kadmin -p kadmin/admin
Authenticating as principal kadmin/admin with password.
Password for kadmin/admin@EXAMPLE.COM:

4. Add server1 as a host principal to the KDC database:

kadmin: addprinc -randkey host/server1.example.com
WARNING: no policy specified for host/server1.example.com@EXAMPLE.COM; defaulting to no policy
Principal "host/server1.example.com@EXAMPLE.COM" created.

5. While logged in, extract server1's key and store it in the /etc/krb5.keytab file:

kadmin: ktadd host/server1.example.com

6. Quit the kadmin shell:

kadmin: quit

7. Activate the use of Kerberos for authentication:

# authconfig --enablekrb5 --update

8. Execute the kinit command to obtain a TGT from the KDC for user1. Enter the password for user1 when prompted.

# kinit user1@EXAMPLE.COM
Password for user1@EXAMPLE.COM:

9. List the details of the TGT received in the previous step:

# klist
Default principal: user1@EXAMPLE.COM

Valid starting     Expires            Service principal
11/01/15 20:58:23  12/01/15 20:58:23  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 11/01/15 20:58:23

10. Log in to server2 as user1. You should not be prompted for a password:

# ssh user1@server2
Last login: Tue May 19 15:04:09 2015 from server1.example.com
$ hostname
server2.example.com
$ id
uid=1000(user1) gid=1000(user1) groups=1000(user1) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

The configuration and testing is complete. user1 is able to log on to server2 without being prompted for a password.
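A couple of optional follow-up checks, not part of the original procedure, can confirm that the password-less login in step 10 really used Kerberos. After exiting the ssh session, klist on server1 should now show a service ticket for host/server2.example.com in addition to the TGT, and GSSAPI authentication should be enabled in the SSH client configuration (it is by default in RHEL7):

# klist | grep host/server2
# grep -i gssapiauthentication /etc/ssh/ssh_config

Run kdestroy if you wish to discard the cached tickets after testing.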

The Red Hat Certified System Administrator (RHCSA) and Red Hat Certified Engineer (RHCE) certification exams are performance-based, hands-on exams designed for IT professionals. These exams are presented in electronic format on a live desktop computer running Red Hat Enterprise Linux 7. There is one RHEL7-based virtual machine running on the desktop computer for the RHCSA exam and two for the RHCE exam. During the exams, candidates have no access to the Internet or to printed or electronic documentation, except for what comes standard with RHEL7. The official exam objectives are listed at http://www.redhat.com/training/courses/ex200/examobjective for RHCSA and at http://www.redhat.com/training/courses/ex300/examobjective for RHCE. Visit these URLs for up-to-date and more in-depth information about the exams. The exam objectives are covered in sufficient detail in the chapters throughout this book. An enumerated list of exam objectives is presented below, along with the chapter in which each objective is covered.
RHCSA Specific Skills:
Understand and Use Essential Tools
Operate Running Systems
Configure Local Storage
Create and Configure File Systems
Deploy, Configure, and Maintain Systems
Manage Users and Groups
Manage Security
RHCE Specific Skills:
System Configuration and Management
Network Services
Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:
HTTP/HTTPS (chapter 22)
DNS (chapter 24)
NFS (chapter 20)
SMB (chapter 21)
SMTP (chapter 23)
SSH (chapter 13) [This chapter is in the RHCSA section]
NTP (chapter 16)
Database Services (chapter 25)

Linux files are organized logically for ease of administration. This file organization is maintained in hundreds of directories located in larger containers called file systems. Red Hat Enterprise Linux follows the Filesystem Hierarchy Standard (FHS) for file organization, which describes names, locations, and permissions for many file types and directories. File systems are primarily of two types, disk-based and memory-based, and they are used to store permanent and runtime data, respectively. Files are static or dynamic, and are referenced using absolute or relative pathnames. Linux supports several different types of files, and their type is based on the kind of data they store. There are a number of operations that can be performed for managing files and directories. Linux includes thousands of files, and each file has certain default attributes that can be viewed or modified. There are tools available that prove to be very helpful in searching for files within a specified boundary and in linking them as desired. Permissions are set on files and directories to restrict their access to authorized users only. Users are grouped into three distinct categories, and each user category is then assigned the required permissions. Permissions can be modified using one of two available methods. The user mask may be defined for individual users so that the new files and directories they create always get preset permissions. Every file in Linux has an owner and a group associated with it. The OS offers three additional permission bits to control user access to certain executable files and shared directories. A directory with one of these permission bits set can be used for group collaboration.

File System Tree

Linux uses the conventional hierarchical directory structure, where directories may contain both files and sub-directories. Sub-directories may further hold more files and sub-directories. A sub-directory, also referred to as a child directory, is a directory located under a parent directory. That parent directory is itself a sub-directory of some other higher-level directory. In other words, the Linux directory structure is similar to an inverted tree, where the top of the tree is the root of the directory structure, and branches and leaves are sub-directories and files, respectively. The root of the directory structure is represented by the forward slash ( / ) character, and this is the point where the entire file system structure is ultimately connected. The forward slash character is also used as a directory separator in a path such as /etc/rc.d/init.d/network. In this example, the etc sub-directory is located under /, making root the parent of etc (which is a child). rc.d (child) is located under etc (parent), init.d (child) is located under rc.d (parent), and at the very bottom, network (leaf) is located under init.d (parent). Each directory has a parent directory and may hold child directories, with the exception of the root and the lowest-level directories. The root directory has no parent, and the lowest-level sub-directories have no children.
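As a quick illustration (not a book exercise), the following commands walk the /etc/rc.d/init.d path mentioned above using both absolute and relative pathnames; the path exists on a default RHEL7 installation:

# cd /etc/rc.d/init.d              (absolute pathname, starting at /)
# pwd
/etc/rc.d/init.d
# cd ../..                         (relative pathname, two levels up from the current directory)
# pwd
/etc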
The hierarchical directory structure keeps related information together in a logical fashion. Compare this concept with a file cabinet that has several drawers, with each drawer storing multiple file folders.

Two file systems, / and /boot, are created during a default RHEL installation. However, the custom installation procedure covered in Chapter 01 "Installing RHEL7 on Physical Computer Using Local DVD" allows us to create /var, /usr, /tmp, /opt, and /home file systems besides / and /boot. The main directories under the / and other file systems are shown in Figure 3-1. Some of these directories hold static data while others contain dynamic (or variable) information. Static data refers to file contents that are usually not modified, and dynamic or variable data refers to file contents that are modified and updated as required. Static directories normally contain commands, library routines, kernel files, device files, etc., and dynamic directories hold log files, status files, configuration files, temporary files, and so on. A brief description of disk-based and virtual file systems is provided in the following sub-sections.

The Root File System (/) – Disk-Based

The root file system is the top-level file system in the FHS and contains many higher-level directories holding specific information. Some of the key directories are:

/etc: The etcetera directory holds system configuration files. Some common sub-directories are systemd, default, lvm, and skel, which contain configuration files for systemd, defaults for user accounts and some other services, the Logical Volume Manager, and per-user shell startup template files, respectively.

/root: This is the default home directory location for the root user.

/media: This directory is used by the system to automatically mount removable media such as floppy, CD, DVD, USB, and Zip drives.

/mnt: This directory is used to mount a file system temporarily.

The Boot File System (/boot) – Disk-Based

The /boot file system contains the Linux kernel, boot support files, and boot configuration files. The default size of this file system is 500MB, and it may be expanded as part of the preparation to update the kernel.

The Variable File System (/var) – Disk-Based

/var contains data that frequently changes while the system is operational. Files holding log, status, spool, lock, and other dynamic data are located in this file system. Some common sub-directories under /var are:

/var/log: This is the storage for most system log files, such as system logs, boot logs, failed user logs, user logs, installation logs, cron logs, mail logs, etc.

/var/opt: For additional software installed in /opt, this directory stores log, status, and other variable data files for that software.

/var/spool: Directories that hold print jobs, cron jobs, mail messages, and other queued items before being sent out are located here.

/var/tmp: Large temporary files, or temporary files that need to exist for longer periods of time than what is allowed in /tmp, are stored here. These files survive system reboots and are not automatically deleted.
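To get a feel for how dynamic /var is, you can check how much space its busiest sub-directories consume and look at the most recently updated log files. This is an illustrative check only; the sizes and file names will differ from system to system:

# du -sh /var/log /var/spool /var/tmp
# ls -lt /var/log | head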
The UNIX System Resources File System (/usr) – Disk-Based

This file system contains general files related to the system, with some portions perhaps shared with other remote systems. This file system is mounted read-only. Some of the important sub-directories under /usr are:

/usr/lib: The library directory contains shared library routines required by many commands and programs located in the /usr/bin and /usr/sbin directories, as well as by the kernel and other programs.

/usr/bin: The binary directory contains crucial user executable commands.

/usr/sbin: Most commands required at system boot are located in this system binary directory, as well as most commands requiring root privileges to run. In other words, this directory contains crucial system administration commands that are not intended for execution by regular users (although they can still run a few of them). This directory is not included in the default search path for normal users because of the nature of the data it contains.

/usr/local: This directory serves as a system administrator repository for storing commands and tools downloaded from the web, developed in-house, or obtained elsewhere. These commands and tools are not generally included with the original Linux distribution. In particular, /usr/local/bin holds executables, and /usr/local/etc contains their configuration files.

/usr/include: This directory contains header files for the C language.

/usr/src: This directory is used to store source code.

/usr/share: This is the directory location for man pages, documentation, sample templates, configuration files, etc. that may be shared on multi-vendor Linux platforms with heterogeneous hardware architectures.

The Optional File System (/opt) – Disk-Based

This file system holds additional software installed on the system. A sub-directory is created for each installed software package.

The Home File System (/home) – Disk-Based

The /home file system is designed to hold user home directories. Each user account is assigned a home directory in which to save personal files. Each home directory is owned by the user the directory is assigned to, with no access for other users.

The Devices File System (/dev) – Virtual

The /dev file system contains device nodes for physical hardware and virtual devices. The Linux kernel communicates with these devices through the corresponding device nodes located here. These device nodes are created and deleted by the udevd service as necessary. There are two types of device files: character (or raw) device files and block device files. The kernel accesses devices using either or both types of device files.

Character devices are accessed serially, with streams of bits transferred during kernel and device communication. Examples of such devices are serial printers, mice, keyboards, terminals, tape drives, etc. Block devices are accessed in a parallel fashion, with data exchanged in blocks during kernel and device communication. Data on block devices is accessed randomly. Examples of block devices are hard disk drives, optical drives, parallel printers, etc.
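A long listing of a few device nodes makes the distinction visible: the first character of each entry is b for a block device file and c for a character device file. The disk name below is an assumption (vda on a typical virtual machine; it may be sda or something else on your system):

# ls -l /dev/vda /dev/null /dev/tty0    (b = block, c = character)
# cat /proc/devices                     (lists registered character and block device major numbers)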
The Process File System (/proc) – Virtual

The /proc file system maintains information about the current state of the running kernel, including details on CPU, memory, disks, partitioning, file systems, networking, and running processes. This virtual file system contains a hierarchy of sub-directories containing thousands of zero-length files pointing to relevant data that is maintained by the kernel in memory. This virtual directory structure simply provides us with an easy interface to interact with kernel-maintained information. The /proc file system is automatically managed by the system.

The contents of /proc are created in memory at boot time, updated during runtime, and destroyed at reboot time. Current hardware configuration and status information is stored underneath this file system. A directory listing of /proc is provided below:

# ll /proc
dr-xr-xr-x. 8 root root 0 Nov 17 14:22 1
dr-xr-xr-x. 8 root root 0 Nov 17 14:22 10
dr-xr-xr-x. 8 root root 0 Nov 17 14:23 1000
dr-xr-xr-x. 8 root root 0 Nov 17 14:23 1009
. . . . . . . .

As mentioned, this file system contains thousands of files and sub-directories. Some sub-directory names are numerical and point to information about specific processes, with the process IDs matching the sub-directory names. Within each such sub-directory, there are files and further sub-directories, which include information such as the memory segments specific to that particular process. Other files and sub-directories point to configuration data for system components. If you wish to view configuration information for a specific item such as the CPU or memory, you can cat the contents of the cpuinfo and meminfo files as shown below:

# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
stepping        : 7
cpu MHz         : 2423.437
. . . . . . . .

# cat /proc/meminfo     (also shows available memory)
MemTotal:        7889040 kB
MemFree:          757800 kB
MemAvailable:    1451248 kB
. . . . . . . .

The data located under /proc is referenced by a number of system utilities, including top, ps, uname, and vmstat, for display purposes.

The System File System (/sys) – Virtual

Information about configured hotplug hardware devices is stored and maintained in the /sys file system. This information is referenced for loading kernel modules, creating device nodes in the /dev directory, and configuring each device. This file system is auto-maintained as well.

The Temporary File System (/tmp) – Virtual

This file system is a repository for temporary storage. Many programs create temporary files as they run or while they are being installed. The contents of this file system are automatically deleted at system reboots.
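To tie the disk-based and virtual file systems together, you can list what is currently mounted by file system type. This is an illustrative check rather than a book exercise; the devices, types, and sizes will vary with your installation choices:

# df -Th / /boot
# findmnt -t proc,sysfs,devtmpfs,tmpfs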

Chapter 1, Pg 14, Exercise 1-1, Step 13: Allocate 40GB to the root partition (instead of 10GB). This will ensure you have enough room available to accommodate server1 and server2 in Chapter 6.

Chapter 2, Pg 27: Use user2 (instead of user1) in the first two examples of the ssh command.

Chapter 2, Pg 53, Lab 2-8: Use " :r /.bash_profile " to import the contents of the .bash_profile file while in vi.

Chapter 3, Pg 78, Exercise 3-2, Step 2: Should say: "Add the write permission for the owner, group, and others, and verify".

Chapter 3, Pg 81, Exercise 3-3, Step 2: Should say "Change into the home directory of user1 ......".

Chapter 3, Pg 82/83: The chmod commands at the bottom of page 82 and beginning of page 83 have a / missing from /usr/bin/su.

Chapter 3, Pg 85, Exercise 3-4, Step 5a (after step 5 and before step 6): You need to add write permission for the group and revoke both read and execute from the public. Run: # chmod g+w,o-rx /sdata

Chapter 4, Pg 103 under Enclosing within Double Quotes: The back quote is another special character that double quotes do not mask. Try the following: # echo "`hostname`"

Chapter 4, Pg 110 under Renicing a Running Process: The PID in the command and the output should be 9191 (and not 1919).

Chapter 4, Pg 114, Exercise 4-1, Step 1: Use the >& symbols to redirect both output and error to the specified file.

Chapter 4, Pg 117, Exercise 4-2, Step 1: Use only the > symbol to redirect only the output to the specified file.

Chapter 4, Pg 118: Question 14 should be read as "What are the two commands that we can use to kill a process?"

Chapter 4, Pg 119: Answer to Question 23 is False.

Chapter 5, Pg 145: The yum command is yum -y install ypbind.

Chapter 5, Pg 152: The second paragraph under Adding and Removing Packages references Figure 5-4 (and not Figure 5-8).

Chapter 5, Pg 154, Question 3 and its answer: Use the word freshening (and not freshing).

Chapter 5, Pg 155, Lab 5-1: Ignore this lab.

Chapter 5, Pg 155: Answer to Question 13 should be " . . . . . . will display the package name the specified file belongs to".

Chapter 5, Pg 156, Lab 5-3: The reference in this lab is for Lab 5-2 (and not 6-2).

Chapter 6: In order for the exercises in this chapter to work without issues, ensure the following:
a. Use the MAC address of your computer in the interface configuration file and not the one presented in the book.
b. Your physical system is physically connected to your home router and the router is powered up.
c. If your network interface is other than em1 or eth0, ensure that the interface file name reflects it. For instance, it would be ifcfg-enp1s0 for interface enp1s0.
d. Specify the correct device name (em1, eth0, enp1s0, eno16677354, etc.) with the DEVICE directive in the interface configuration file.
e. Your physical system and the router are on the same subnet.

Chapter 6, Pg 183, Line 20: Specify 8930 as the size of the root logical volume (instead of 9230).

Chapter 7, Pg 199: The yum command at the bottom of the page is: yum list installed kernel*.

Chapter 7, Pg 205, Exercise 7-2, Step 4: The command is: cat /boot/grub2/grub.cfg.

Chapter 7, Pg 212: The second systemctl command example should be: "systemctl -t socket --all".

Chapter 7, Pg 220, the first paragraph under "Switching into Specific Targets": The reference is for Table 7-4 (and not Table 7-2).

Chapter 7, Pg 229 under the "Understanding the Journal" topic: The journal logs are stored temporarily under the /var/run/log/journal (or /run/log/journal) directory.

Chapter 7, Pg 232: Answer to Question 5 is the grub.cfg file.
Chapter 7, Pg 233: Answer to Question 16 is "The -U option would instruct the rpm command to upgrade the specified package or install it if it is not already installed. The -i option, on the other hand, would instruct the command to install the package and fail if the package already exists."

Chapter 8, Pg 248, Exercise 8-2, Step 1: The gid should be 1010 (and not 1001). Do not specify -g 1010 in the command, as this private group will be created automatically.

Chapter 8, Pg 252: The command in the third example under Switching Users should be: $ su - user3 (and not user1).

Chapter 8, Pg 254: The third line from the top should not contain the % before PKGADM. It should be read as: PKGADM ALL=PKGCMD

Chapter 8, Pg 256, Exercise 8-7, Step 2: You need to run the gpasswd command separately on the user2new and user3 accounts, so gpasswd -a user2new linuxadm and gpasswd -a user3 linuxadm.

Chapter 9, Pg 277, Exercise 9-4: You will need to run the w command within gdisk to write the updated partition table information to the disk before running q to quit this utility.

Chapter 9, Pg 280: The fourth paragraph under Logical Volume should be read as "Currently, there are three logical volumes on server1 that were created during the installation. Run the . . . . . . . "

Chapter 10, Pg 312, Exercise 10-3, Step 6: You need to grep on mntxfs.

Chapter 10, Pg 314, Exercise 10-4, Step 6: You need to grep on mntvfat.

Chapter 10, Pg 321, Exercise 10-7, Step 7: The shutdown command should be executed without the -y switch.

Chapter 10, Pg 323, Exercise 10-8, Step 11: The shutdown command should be executed without the -y switch.

Chapter 10, Pg 326, Exercise 10-9, Step 2: The command should be mkdir /autodir.

Chapter 10, Pg 331-333, Exercises 10-11 and 10-12: Use the vdc2 partition (instead of vdb2).

Chapter 11, Pg 345: The yum command at the bottom of the page should have firewall (and not firewalld).

Chapter 11, Pg 348, just before The iptables Command topic: It should be systemctl status iptables -l and not systemctl status firewalld -l.

Chapter 11, Pg 351, Exercise 11-2, Step 3: This rule should be inserted into the OUTPUT chain (and not the INPUT chain).

Chapter 11, Pg 352, just before The firewall-cmd Command topic: The path should be /usr/lib/firewalld/services/ssh.xml and not /usr/lib/systemd/services/ssh.xml.

Chapter 11, Pg 354, Exercise 11-3, Step 6: You should not see 443/tcp in the output of firewall-cmd --list-ports, as this rule was added as a runtime rule in step 4 but deleted when firewall-cmd --reload is executed in step 5. If you wish to view the presence of this rule, run firewall-cmd --list-ports after step 4 and before step 5.

Chapter 11, Pg 355, Exercise 11-4, Step 2: The list-ports option with the firewall-cmd command has two dash characters.

Chapter 11, Pg 365: The second semanage command (middle of the page) uses the -l option (and not --l).

Chapter 11, Pg 370, Exercise 11-6, Step 5: The SELinux user should be modified to user_u (and not staff_u).

Chapter 12, Pg 403, under Relative Distinguished Name: (See Figure 12-6) should be (See Figure 12-7).

Chapter 15, Pg 461, Exercise 15-3, Step 6: It should read "Generate UUIDs for both new interfaces using the uuidgen command:"

Chapter 15, Pg 463, Exercise 15-4, Step 1: You need to open the virtual console for server2 on host1.

Chapter 17, Pg 507, Exercise 17-4: The exercise description references Table 17-5 (and not 16-4).

Chapter 19, Pg 541, Exercise 19-1, Step 11: The demo_mode is used for testing purposes only; it is not recommended for use in live environments.
Chapter 19, Pg 547, Exercise 19-2, Step 10: Unmount the /iscsidisk1 file system before rebooting the initiator in step 10. This is required only for the first reboot after the initiator configuration, to prevent a system hang. The file system should automatically remount after the reboot. This manual unmount will not be required before subsequent reboots.

Chapter 19, Pg 553, Exercise 19-3, Step 22: Unmount the /iscsifile1 file system before rebooting the initiator in step 22. This is required only for the first reboot after the initiator configuration, to prevent a system hang. The file system should automatically remount after the reboot from the fstab file. This manual unmount will not be required before subsequent reboots.

Chapter 20, Pg 562, Exercise 20-1, Step 2: Run chmod +w /common to ensure this directory is writable.

Chapter 20, Pg 567, Exercise 20-3, Step 4: The step description should say nfssdatagrp and not sdatagrp.

Chapter 21, Pg 583, under SELinux Requirements for Samba Operation: Execute systemctl start smbd before running the ps command to view the SELinux context on the smbd process. This is only required if smbd is not already running.

Chapter 22, Pg 614, Exercise 22-5, Step 1: You also need to set the Listen directive at the beginning of the httpd.conf file to listen on port 8989. Define it as Listen 8989.

Chapter 23, Pg 630, under Common Terms: The term "Mail Submission Agent" should be abbreviated as MSA.

Chapter 25, Pg 676, Table 25-1: The name of the third package is mariadb-libs and not Mariadb-libs.

Chapter 25, Pg 683, Second Paragraph: ....... semicolon (;) at the end.

I forgot to mention the www.certdepot.net website in the bibliography of my book. I regularly checked this website while writing the book. In addition, I visited hundreds of other useful websites, beyond the ones listed in the bibliography, to get a general sense of what others think and how they do things, so as to provide my readers with more refined information.