Linux and sysadmin posts

Installing a PXE server using dnsmasq

Written by Robert -

This installation is based on Debian 9.

This manual requires two network cards: one for accessing the storage where the PXE files are stored, the other for serving DHCP, TFTP and FTP, which our PXE setup requires.

Debian installation

Install Debian as usual.

systemctl stop systemd-timesyncd
systemctl disable systemd-timesyncd

Network configuration

We need to set up a static IP for the network card that will act as the DHCP server.

vim /etc/network/interfaces

Add the following configuration:

allow-hotplug enp2s0
iface enp2s0 inet static

Change the network settings to whatever works in your environment.
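As an illustration, a complete static stanza with hypothetical addresses (these values are assumptions, not from the original setup) could look like this:

```
allow-hotplug enp2s0
iface enp2s0 inet static
    # hypothetical example addresses - adjust to your environment
    address 192.168.0.1
    netmask 255.255.255.0
```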


TFTP is handled by dnsmasq. The root directory for TFTP is specified in the configuration below, but we do need to create it first:

mkdir /export
chmod 755 /export
echo user_allow_other >> /etc/fuse.conf
chown root:fuse /etc/fuse.conf
chmod 640 /etc/fuse.conf



Installing dnsmasq

apt-get install dnsmasq


Back up the original configuration files:

cd /etc/
cp dnsmasq.conf dnsmasq.conf.original
cp -r dnsmasq.d dnsmasq.d.original

Open the dnsmasq.conf file:

vim dnsmasq.conf

Add the following lines, adjust these where needed:

except-interface=eno1 # exclude this specific interface
listen-address= # your IP address
port=0 # disables the DNS features
user=root # the user the service runs as
group=root # the group the service runs as
log-facility=/var/log/dnsmasq.log # I want logs
log-queries # for debugging
dhcp-range=enp2s0,, 0,,1h # interface, range from - to, netmask and lease time; change accordingly
dhcp-option=3, # specifies the gateway
dhcp-boot=pxelinux.0 # filename for PXE boot
enable-tftp # we need TFTP for PXE boot
tftp-root=/export/ # where the TFTP files are
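As an illustration, the two DHCP lines filled in with hypothetical addresses (a 192.168.0.0/24 network with the PXE server as gateway — these values are assumptions) would read:

```
dhcp-range=enp2s0,192.168.0.50,192.168.0.150,255.255.255.0,1h
dhcp-option=3,192.168.0.1
```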


The way our setup works is that PXE boots directly from TFTP, but configurations are pulled from an FTP server. Because of this, we need to set up an anonymous FTP server, so no authentication is needed to pull configuration files from it.


Installing proftpd

apt-get install proftpd-basic


First we make a backup of the standard configuration:

cd /etc
cp -r proftpd proftpd.original

Let's set up the anonymous settings:

vim /etc/proftpd/conf.d/anon.conf

Add the following lines:

<Anonymous ~ftpuser>
# default ftp user
User ftp
# default ftp user group
Group ftp
# alias anonymous as the ftp user
UserAlias anonymous ftp
# all files appear to belong to ftp
DirFakeUser on ftp
DirFakeGroup on ftp
# the ftp user doesn't need a valid shell
RequireValidShell off
# Fine-tune the client limit yourself, you can allow more
MaxClients 10
# No writing
<Directory *>
<Limit WRITE>
DenyAll
</Limit>
</Directory>
</Anonymous>

Set up the global settings:

vim /etc/proftpd/conf.d/custom.conf

Add the following lines:

# Users don't require valid shell accounts
RequireValidShell off
# I don't require IPv6
UseIPv6 off
DefaultRoot ~ ftpuser
# limit login to the ftpuser group
<Limit LOGIN>
DenyGroup !ftpuser
</Limit>

Set up TLS settings to prevent authentication issues with the FTP client:

vim /etc/proftpd/conf.d/tls.conf

Add the following lines:

<IfModule mod_tls.c>
TLSEngine on
TLSLog /var/log/proftpd/tls.log
TLSProtocol TLSv1.2
TLSRSACertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
TLSRSACertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
TLSVerifyClient off
TLSRequired off
</IfModule>

Auto mounting sshfs

On the new PXE server:

useradd -m -s /bin/false autossh
su - autossh -s /bin/bash
ssh-keygen

Add the following line to the beginning of the


There should be a space between that line and ssh-rsa

On the server that shares the SSH directory:

useradd -m sshfs

Login as the sshfs user:

su - sshfs -s /bin/bash
cat .ssh/ > .ssh/authorized_keys
chown sshfs:sshfs /home/sshfs -R
chmod o-rwx /home/sshfs -R
chmod go-w /home/sshfs
chmod 700 /home/sshfs/.ssh
chmod 644 /home/sshfs/.ssh/authorized_keys
chown sshfs:sshfs /home/sshfs/.ssh/authorized_keys
chown sshfs:sshfs /home/sshfs/.ssh
service sshd restart

Mount the SSH directory:

sshfs -p 21112 sshfs@ /export -o allow_other

The folder should now be mounted. Next, we need to make this happen automatically.

Before we do this, we need to unmount it. Exit out of the user and unmount the folder:

umount /export

Because I want this mounted as a user, I'm going to use the crontab to mount it. Due to an issue with crontab under Debian, I'm going to let it wait a minute before mounting the folder.

As a root user, switch to the autossh user:

su - autossh -s /bin/bash

Add the following lines to

sshfs -p 21112 sshfs@ /export -o allow_other

Now make the file executable:

chmod +x

Let's add it to the crontab:

crontab -e

Add the following line:

@reboot sleep 60 && /home/autossh/ >> /home/autossh/mount.log 2>&1

After a reboot, the folder should be automatically mounted.
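If the share is already mounted when the script fires (say, cron runs it twice), a second sshfs call will fail. A small guard, sketched here with a placeholder <server> address you'd replace with your own, avoids that:

```shell
# Return success when the given path is an active mountpoint.
# /proc/mounts lists every mount, one per line; the mountpoint is the second field.
is_mounted() {
    grep -qs " $1 " /proc/mounts
}

# In the @reboot script you would then run something like (placeholder address):
# is_mounted /export || sshfs -p 21112 sshfs@<server>: /export -o allow_other
```

This keeps the cron job idempotent: re-running it is harmless once the mount is up.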


Setting up an NFS server

We need to install the software first:

apt-get install nfs-common
apt-get install nfs-kernel-server

Now we can set it up.

vim /etc/exports

Add the following line:
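As an illustration only, a hypothetical entry that shares /export read-only with a local subnet (the network address here is an assumption) would look like:

```
/export 192.168.0.0/24(ro,sync,no_subtree_check)
```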


Let's restart the service:

systemctl restart nfs-server.service

From another machine, we can now check if the export directory is visible:

root@computer:/etc# showmount -e
Export list for

Systemd runlevels

Written by Robert -

Most Linux users know about runlevels. I've written about those before. If you need to use systemd, you can still think in runlevels. Luckily the mapping is easy to understand:

List of runlevels

| run level | systemd target |
| --------- | -------------- |
| 0 | poweroff.target |
| 1 | rescue.target |
| 3 | multi-user.target |
| 5 | graphical.target |
| 6 | reboot.target |
| emergency | emergency.target |

Current runlevel

You can find out the default runlevel by using the following command:

systemctl get-default

Changing the runlevel

First let's focus on temporary changes. We can change the current runlevel by using the isolate option. For example, to switch to the commonly used runlevel 3, you can use systemctl:

systemctl isolate multi-user.target

This can also be used to poweroff the computer:

systemctl isolate poweroff.target

To change this permanently, you need to point the default.target link at the new target. Here I'm going to change it to runlevel 3, or multi-user.target:

ln -sf /usr/lib/systemd/system/multi-user.target /etc/systemd/system/default.target

That's how you change it.

System backup with mondo restore

Written by Robert -

It's always a good idea to have a system backup. Normal backups are fine, but being able to recover an entire system is very useful.

For this situation I'm going to use mondo rescue to provide me with a recoverable system. The advantage of this software is that it creates a bootable ISO file that you can use to restore the system. This can be burned to CDs or DVDs, or put on regular USB drives.


In this case I'm using this on CentOS 6. I need to add the right sources for it to work.

cd /etc/yum.repos.d/  

Now let's update yum:

yum check-update

After this, we should be able to install mondo from the repository:

yum --nogpgcheck install mondo

The reason we use the --nogpgcheck option is that there is a small mistake in the repo where it references the wrong key.


Adding a place where it can place the backups

I prefer to use sshfs for this purpose, since the storage I'm going to put the backups on has proper ssh support. To enable it on CentOS 6, we need to enable fuse:

modprobe fuse

Before we can install sshfs, we need to enable the epel-release:

yum install epel-release

After that, we can install sshfs:

yum install sshfs

Now that it's installed, we can create a directory to mount the sshfs share on:

mkdir /mnt/backup

Let's mount the share. In my case, the IP I'm putting the backups on is and the user is root (it's just a local test). The directory on the computer that makes the backup is /mnt/backup and the device with the storage has the disk where I wish to place the backup on, mounted to /mnt/sda:

sshfs root@<ip>:/mnt/sda /mnt/backup

Now that the folder is mounted, verify that you can create and remove files on the share. If that all works, we can create a backup.

Backing up the system

Mondo offers several different ways to make a backup. Because I'm using an sshfs mount, I will be using the Hard Disk option:


Now we can select the location of the backups. This is where the bootable ISO images will be located:


The type of compression is personal. For me, the default option is good.


Maximum compression can take time, but it saves disk space.


The size is also something you need to define yourself. You can use the stock 4480, or change it to something else. In this case I'm going to change it to 14000, but you can easily make it larger if you are going to use an external disk.


Now we can change the name to something recognizable. You can use the server name, the IP address or something else that you can later recognize.


In most cases, it's best to use / as the directory you wish to back up. There are exceptions, but changing this will result in a non-bootable ISO.


Here you can select all the folders you don't wish to back up. Be careful with this.


The temporary folder has to be located on the current system. This can't be in the sshfs folder that we mounted before.


You can put the scratch location on the sshfs share, like I'm doing here.


Extended attributes can be useful, especially with SELinux. This will take some extra time.


Your current kernel.


I'm a big fan of verification, especially with backups.


After pressing Yes, the backup will start.



When it's done, you should end up with a nice iso image that contains the data.


Restoring is pretty easy. I'm using VirtualBox for testing purposes, in which I can boot directly off the ISO. In other cases you might have to burn this to a DVD, put it on a hard disk or USB stick, and boot from there. After booting the ISO (use the first one first), you get to see the menu.


It's pretty clear what the options are. I'm running this in a fresh virtual machine, so I'm going to just nuke it.


After it's done you get a screen similar to this:


Type exit to reboot and you should get a screen similar to this:


Your restore is complete, your device is back up and running. Just check your logs for errors and use the machine.


Now that we know what options we want, we can use them to create a nice way to do this without all the menus.

First we are going to take the settings above and run them as a single command.

You can find the options here:

mondoarchive -O -V -p centos-mondo-backup$(date '+%d-%b-%Y') -i -E "/mnt" -I "/" -N -d '/mnt/backup' -s '14000m' -9 -G -S '/mnt/backup' -T '/tmp' -z   
#-O = create backup  
#-V = verify  
#-p = prefix name  
#-i = use ISO images  
#-E = directories to exclude  
#-I = path to backup  
#-N = exclude network directories  
#-d = backup device  
#-s = Maximum size  
#-9 = compression level 9, maximum compression  
#-G = use gzip because it's faster  
#-S = path of the scratch directory  
#-T = path of the temp directory  
#-z = save extended attributes  

Putting it all together

We can now make a simple script to create a full system backup

#!/bin/bash
# Mount sshfs (this requires the use of ssh keys)  
sshfs root@<ip>:/mnt/sda /mnt/backup  
# Create the backup  
mondoarchive -O -V -p centos-mondo-backup$(date '+%d-%b-%Y') -i -E "/mnt" -I "/" -N -d '/mnt/backup' -s '14000m' -9 -G -S '/mnt/backup' -T '/tmp' -z   
#-O = create backup  
#-V = verify  
#-p = prefix name  
#-i = use ISO images  
#-E = directories to exclude  
#-I = path to backup  
#-N = exclude network directories  
#-d = backup device  
#-s = Maximum size  
#-9 = compression level 9, maximum compression  
#-G = use gzip because it's faster  
#-S = path of the scratch directory  
#-T = path of the temp directory  
#-z = save extended attributes  
# Unmount share  
umount /mnt/backup  
# Send log file to mail address  
mail -s 'Mondo full backup' < /var/log/mondoarchive.log  
# Done!  

Now add this script to your crontab and you have automated a full backup of your device.

Remove logs older than x days

Written by Robert -

Logrotate can be a handy tool, but sometimes, a simple bash script can be easier.
In my situation, samba writes log files to /var/log/samba. This only creates small files that I don't need for a long time. I'm going to create a script to remove the log files that are older than 30 days.

First I need to create a file (cron only executes names consisting of uppercase, lowercase, digits, underscores and dashes, so a name with an extension would not be executed):

vim /etc/cron.daily/removelogs

Now we can create a simple script:

#!/bin/sh
# Remove files older than 30 days.
find /var/log/samba/* -maxdepth 1 -mtime +30 -exec rm {} \; > /var/log/logsdelete.log
# find: find files in the given location
# maxdepth: how deep it should search. 1 means only the top directory; leave this out to
# search everything below the folder.
# mtime: only match files last modified more than +n days ago
# exec: execute rm on each result
# >: write output to a .log file

The comments in this script should make sense. You can easily change the time or location, or add more rules to different folders where you need the same thing done.
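Before pointing the script at real logs, you can rehearse the same find invocation in a throwaway directory (the file names here are made up for the demo):

```shell
# Build a scratch directory with one stale and one fresh file.
tmp=$(mktemp -d)
touch -d '40 days ago' "$tmp/old.log"   # past the 30-day cutoff
touch "$tmp/new.log"                    # brand new, must survive

# Same age filter as the cron script, pointed at the scratch directory;
# only old.log should be removed.
find "$tmp" -maxdepth 1 -type f -mtime +30 -exec rm {} \;
```

Afterwards, clean up the scratch directory with rm -rf "$tmp".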

Now that the script is in place, we can save the script and make it executable.

chmod +x /etc/cron.daily/removelogs

Generating SSH keys

Written by Robert -

Here we are going to generate 4096 bit keys to use with SSH.

To generate SSH keys on your desktop, use the following command:

I assume that you have ssh-keygen installed. If that's not the case, consult the manual of your distribution.

ssh-keygen -t rsa -b 4096


This is the location for your private key. The default should work for most users.


Enter a difficult password, something that you don't use for anything else.


Repeat the password.


Here you can see that it successfully created both the public (~/.ssh/ and the private (~/.ssh/id_rsa) keys. The randomart is not something you usually need.

Always keep your private key private.

Create OpenSSH Server

Written by Robert -

Use your distro's package manager to install OpenSSH.

In this case I'm using Arch, so the command is sudo pacman -S openssh.



Open the file /etc/ssh/sshd_config using your favorite text editor. Add a line called AllowUsers with a username behind it. Also uncomment the Port line. You can change the default port if you wish. This is especially useful for a machine that has its SSH port directly on the internet, as in the case of a VPS. If you do this, or don't trust the network the machine is attached to, make sure you take additional security measures.

In my case I added these two lines:
AllowUsers robert
Port 22


Start the server:

First we start the SSH server:
sudo systemctl start sshd.socket

Now we can enable it so it starts automatically at boot:
sudo systemctl enable sshd.socket

Connecting to the server

From the client you can now use the following command to connect to the server:

If you have the same username on the server as you have on the client:
ssh ip_address

If you don't have the same username on the client as you have on the server, you need to specify this:
ssh username@ip_address

After accepting the fingerprint, you should now be connected:


Network stresstest

Written by Robert -

Ever wanted to see how a switch performs when running it 24 hours? Here is how I did it:

Grab two boxes running Linux. Install the tool iperf on both systems using your favorite package manager. The program needs two systems to function properly: one acts as a server, the other as a client. I prefer to run the test in a tmux session, for two reasons. The first is that I only need one terminal session to run two commands. The second is that if someone accidentally closes the terminal, tmux and the commands keep running in the background.

Server side

On the server side, you need to run the following command:

iperf -s -i 10

The -s stands for server, and -i 10 is the interval when the screen refreshes, in this case every 10 seconds.

Personally, I prefer to run it as following:

iperf -s -i 10 > /home/username/iperf.txt

That way, all information is sent to a text file. The downside is that the output is now invisible in the terminal.


Here is the output that the command gives. It's not a lot. So I split the tmux window in two, and on the other window I run the following command:

tail -f /home/manjaro/iperf.txt

This reads the file and all new information put in the file to your screen. Like the cat command, but it shows new information as well.


After running that command, you can see all the information.

Client side

On a different computer, we start tmux again so we can start the client.

The client side works a bit differently. The server only listens; the client starts the actual test.

What we do is run the following command:

iperf -c <ipofserver> -i <interval> -t <time> > /home/username/iperf.txt

This starts the data transfer. The time is in seconds. 24 hours x 60 minutes x 60 seconds = 86400 seconds in a day. Because this test is running for 24 hours, this is the amount of seconds it will run. It will stop automatically after the time has passed. I again output the data to a text file.
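The duration math can also be left to the shell, so the 24-hour figure stays visible in the command itself (the iperf line is shown commented out, with the same placeholders as above):

```shell
# 24 hours expressed in seconds, computed instead of hardcoded
seconds=$((24 * 60 * 60))
echo "$seconds"   # prints 86400

# The actual run would then be (placeholders as above):
# iperf -c <ipofserver> -i 10 -t "$seconds" > /home/username/iperf.txt
```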

I split the window in two again, and I start the tail of the text file.

The images of the output are from two different tests. The data on both sides should be exactly the same so it shouldn't matter on which screen you are reading the data.


Logwatch

Written by Robert -

Logwatch is a very nice tool for getting periodic information about your system. It can generate a nice report with information like commands run, kernel errors, smb access, ssh logins and so on.

For this to work, you need to install logwatch using your distro's package manager.

After installation, you can alter the configuration using your favorite editor:

vim /usr/share/logwatch/default.conf/logwatch.conf

Here is an output of a simplified logwatch.conf:

# Default Log Directory
# All log-files are assumed to be given relative to this directory.
LogDir = /var/log

# You can override the default temp directory (/tmp) here
TmpDir = /var/cache/logwatch

# Default person to mail reports to.  
MailTo =
# Default person to mail reports from.  
MailFrom = Logwatch_servername

# If set to 'Yes', the report will be sent to stdout instead of being
# mailed to above person.
Print =

# The default time range for the report...
# The current choices are All, Today, Yesterday
Range = yesterday

# The default detail level for the report.
# This can either be Low, Med, High or a number.
# Low = 0
# Med = 5
# High = 10
Detail = Med

# The 'Service' option expects either the name of a filter
# (in /usr/share/logwatch/scripts/services/*) or 'All'.
# The default service(s) to report on.  This should be left as All for
# most people.
Service = All
# You can also disable certain services (when specifying all)
Service = "-zz-network"     # Prevents execution of zz-network service, which
                            # prints useful network configuration info.
Service = "-zz-sys"         # Prevents execution of zz-sys service, which
                            # prints useful system configuration info.
Service = "-eximstats"      # Prevents execution of eximstats service, which
                            # is a wrapper for the eximstats program.

# By default we assume that all Unix systems have sendmail or a sendmail-like system.
# The mailer code Prints a header with To: From: and Subject:.
mailer = "sendmail -t"

After setting this up, you will get daily e-mails from logwatch. Here is an example of a logwatch generated e-mail using the config above.

 ################### Logwatch 7.3.6 (05/19/07) #################### 
        Processing Initiated: Sat Mar  3 07:40:03 2018
        Date Range Processed: yesterday
                              ( 2018-Mar-02 )
                              Period is day.
      Detail Level of Output: 5
              Type of Output: unformatted
           Logfiles for Host: servername
 --------------------- Cron Begin ------------------------ 
 Commands Run:
    User root:
       /usr/local/bin/backup: 1 Time(s)
 ---------------------- Cron End ------------------------- 
 --------------------- Kernel Begin ------------------------ 
 ---------------------- Kernel End ------------------------- 
 --------------------- pam_unix Begin ------------------------ 
    Sessions Opened:
       robert: 3 Time(s)
 ---------------------- pam_unix End ------------------------- 
 --------------------- samba Begin ------------------------ 
 Opened Sessions:
    Service data as user:
       asdf       from host 1234 (  : 
                1 Time(s)
 ---------------------- samba End ------------------------- 
 --------------------- SSHD Begin ------------------------ 
 Users logging in through sshd:
    robert: 2 times
 ---------------------- SSHD End ------------------------- 
 --------------------- Sudo (secure-log) Begin ------------------------ 
 robert => root
 /bin/bash - 3 Times.
 ---------------------- Sudo (secure-log) End ------------------------- 
 --------------------- Disk Space Begin ------------------------ 
 Filesystem      Size  Used Avail Use% Mounted on
 /dev/sda3       530G   13G  491G   3% /
 /dev/sda1       976M  145M  781M  16% /boot
 ---------------------- Disk Space End ------------------------- 
 ###################### Logwatch End ######################### 

Keeping your Gentoo box up to date

Written by Robert -

Here is a short description of keeping a Gentoo box up to date. There are many ways to do this, but this way is mine. This might not be the best way to do it on your system.

Without options, the output of emerge can be a bit overwhelming for new users, as it shows the full list of packages. By default it's not optimised, but it's Gentoo: you are supposed to configure it yourself.

First sync emerge to ensure emerge has all the latest information:
emerge --sync

Review the configuration file changes:
dispatch-conf

Now we can start the upgrade of packages:
emerge -uDN @world --changed-use --tree --unordered-display --complete-graph --with-bdeps=y -j 8 --load-average 8 --quiet

The -u is the update switch, -D is for deep, and -N is for newuse (checks for USE flag changes).
--changed-use: includes packages where USE flags have changed since install.
--tree --unordered-display: makes the output a bit more readable.
--complete-graph --with-bdeps=y: makes sure it doesn't break dependencies.
-j: the number of packages it can build at the same time. The handbook suggests cores or threads +1. If you have a multithreading CPU, use threads; if not, use cores. If you have a 2-core, 4-thread machine, the general advice is -j 5. Some users advise just using the total amount of threads or cores, so in that case -j 4 would be best.
--load-average: I don't use this with arguments, so it removes a previous load limit.
--quiet: reduces the output.

Now that's done, we can remove packages that aren't required anymore:
emerge -av --depclean

Let's review the configuration file changes again. It doesn't really need to be done twice, but I prefer it:
dispatch-conf

Verify the Gentoo security advisories for all installed packages:
glsa-check -f all

Setting up logrotate

Written by Robert -

Assuming you have logrotate installed, it's very easy to set it up. Setting up logrotate has to be done for each file.

In this example, I set it up for files called dnsmasq.log which, in my case, come from the service handling DHCP. This creates rather large log files. Since text files compress easily, I advise doing so, as it saves a lot of hard disk space. The config file should be placed under /etc/logrotate.d.

The system I'm configuring this on is running sysvinit but it should work similar in systemd.

# let's start with the name of the log that needs to be rotated
/var/log/dnsmasq.log {
# If there isn't a file with that name, continue to the next file with missingok
missingok
# Don't rotate if the file is empty
notifempty
# Compress the file using gzip
compress
# Delaying is useful if the file is in use while trying to rotate it. It will be rotated the next time it's needed
delaycompress
# What is the minimum filesize needed before a logrotate takes place?
size 1M
# create a new file. Notice I only gave root rights to this file. No need to execute, so 600 should be enough.
create 0600 root root
}

For this to work, you need to have a log file there that's bigger than 1MB in size. When that's the case, you can restart the logrotate service by typing the following commands:

Sysvinit and OpenRC:
service logrotate restart

Systemd:
systemctl restart logrotate

Using Gmail for sending smtp mail

Written by Robert -

Before doing any of this, make sure you allow less secure applications to use your Gmail account. I cannot advise doing this with your normal e-mail account; only do this with an account dedicated to this purpose, because of the dangers of having SMTP open.

After being logged in, you can change this by going here:

I assume you have the ssmtp package installed on your system. You need to edit the file /etc/ssmtp/ssmtp.conf:

# The user that gets all the mails  
# The mail server  
# The address where the mail appears to come from for user authentication.  
# The full hostname. You need to use a FQDN or localhost  
# Use TLS before starting negotiation  
# Google requires the use of starttls  
# Your e-mail address as your username  
# Your password as your password  
# Can the email 'From' header override the default domain?  
# Your certificate bundle  
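As an illustration, a filled-in ssmtp.conf for Gmail could look like the sketch below. The directive names are standard ssmtp options (verify against man ssmtp.conf on your system), and every value here is a placeholder you must replace:

```
# The user that gets all the mails
root=username@gmail.com
# The mail server
mailhub=smtp.gmail.com:587
# The address where the mail appears to come from for user authentication
rewriteDomain=gmail.com
# The full hostname. You need to use a FQDN or localhost
hostname=localhost
# Use TLS before starting negotiation
UseTLS=YES
# Google requires the use of starttls
UseSTARTTLS=YES
# Your e-mail address as your username
AuthUser=username@gmail.com
# Your password as your password
AuthPass=password
# Can the email 'From' header override the default domain?
FromLineOverride=YES
# Your certificate bundle
TLS_CA_File=/etc/ssl/certs/ca-certificates.crt
```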

Please verify that your bundle is in that location.

A note on passwords with ssmtp

There has been a bug for many years regarding special characters in ssmtp passwords. Especially with passwords like 123#456, where a pound sign is present, the password won't be parsed properly towards the server. I personally advise using longer random passwords, like DKDbxnx2fy9zTQEdgxxUqBLm, instead. There are several password generators online that can assist you with creating one.

Clear jobs in iDrac7

Written by Robert -

This shouldn't be done unless you really really need to. I had this job that was stuck for hours and was still there after several reboots. If possible, avoid doing this.

Log in to the iDrac7 controller using your favorite SSH tool.
Open the racadm tool by typing racadm
To view the list of jobs, type jobqueue view

-------------------------JOB QUEUE------------------------
[Job ID=JID_835543100943]
Job Name=TSR_Collect
Status=Completed with Errors
Start Time=[Not Applicable]
Expiration Time=[Not Applicable]
Message=[SYS165: Job completed with errors.]
Percent Complete=[100]

Now you can see the ID of the job that's stuck in the queue. For some reason, it's not possible to clear this properly using the iDrac website.

To delete the job, use the following command:

jobqueue delete -i JID_835543100943

You can see the result here:

racadm jobqueue delete -i JID_835543100943
RAC1032: JID_835543100943 job(s) was cancelled by the user.

What ports are in use

Written by Robert -

I'm running an application and I need to make sure that application isn't using an already used port on my system.

For this purpose there are several tools available; lsof and nmap spring to mind.

lsof is an old Unix tool that lists open files (hey, it's not called "LiSt Open Files" for nothing, right?). Running lsof -i -P -n and piping it through | grep LISTEN will show you the ports the machine is listening on. On my box it displays the following:

root@server:~# lsof -i -P -n | grep LISTEN
mysqld     1086    mysql   18u  IPv4    12899      0t0  TCP (LISTEN)
sendmail-  1091     root    4u  IPv4    12532      0t0  TCP (LISTEN)
sendmail-  1091     root    5u  IPv4    12533      0t0  TCP (LISTEN)
sshd       1169     root    3u  IPv4    13367      0t0  TCP *:21 (LISTEN)
sshd       1169     root    4u  IPv6    13369      0t0  TCP *:21 (LISTEN)
apache2    1774 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2    1774 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2    7567 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2    7567 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2    7606 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2    7606 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   24538     root    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   24538     root    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   29647 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   29647 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   29648 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   29648 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   30161 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   30161 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   30163 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   30163 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   30164 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   30164 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   30165 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   30165 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)
apache2   30405 www-data    4u  IPv6   382274      0t0  TCP *:80 (LISTEN)
apache2   30405 www-data    6u  IPv6   382278      0t0  TCP *:443 (LISTEN)

Ok so I know my machine is listening on port 80 and 443.
Now, this generates a nice list, but it doesn't show me what is visible from the outside. To do that, we use the wonderful application called nmap. Keep in mind this only works properly when done remotely. Because of that, I will show you both the internal and external output of nmap:
Internal on the server itself:

root@server:~# nmap localhost

Starting Nmap 6.47 ( ) at 2017-01-02 20:01 UTC
Nmap scan report for localhost (
Host is up (0.0000030s latency).
Other addresses for localhost (not scanned):
Not shown: 994 closed ports
21/tcp   open  ftp
25/tcp   open  smtp
80/tcp   open  http
443/tcp  open  https
587/tcp  open  submission
3306/tcp open  mysql

Nmap done: 1 IP address (1 host up) scanned in 1.56 seconds
You have new mail in /var/mail/root

External from my own machine:

[robert@arch ~]$ nmap 404.404.404.404

Starting Nmap 7.31 ( ) at 2017-01-02 21:01 CET
Nmap scan report for 404.404.404.404
Host is up (0.12s latency).
Not shown: 997 filtered ports
21/tcp  open  ftp
80/tcp  open  http
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 9.54 seconds

Restarting a crashed ESX vCenter host

Written by Robert -

If you have a vCenter-managed ESX host inside a cluster that crashed for whatever reason, a normal reboot of the host will cause you to lose log files. If you can, it's better to start an SSH session to the host and run the following commands:

/etc/init.d/hostd restart

Hostd is the agent of the ESX server. Vpxa passes the information to the hostd and the hostd passes the information to the ESX server.

/etc/init.d/vpxa restart

Vpxa is the vCenter server agent. This is the part that is managed by the vCenter server. Any changes you make in your cluster go over this service (modifying a VM for example).

After restarting both services, it's very likely that the server will be up and running again. If not, other measures can be taken accordingly.

Apache2 and https

Written by Robert -

I renewed the backend of the blog and went with Let's Encrypt for my SSL certificate. The process on Ubuntu-based servers is fairly straightforward.

Installing the SSL tool:

When logged in over SSH, you can download the file from the eff to install the certification:

sudo wget -P /usr/local/sbin

After that you need to make the script executable:

sudo chmod a+x /usr/local/sbin/certbot-auto

This tool can save you quite some time as it requests the certificate and installs it for you automatically. Because this blog runs on Apache2, the command is very simple:

certbot-auto --apache -d -d

If you wish, you can install it for multiple subdomains and other domains that you are running on a single host.

After that, the website is active on https.

Automatically renew SSL certificate:

Let's Encrypt doesn't issue long-lasting SSL certificates. That's not a problem, because the certbot tool can easily handle certificate renewals.

If you open the crontab using the 'crontab -e' command, you can enter this line:

0 1 * * 7 /usr/local/sbin/certbot-auto renew >> /var/log/certbot-auto-renew.log

That will automatically renew the certificate every Sunday at 01:00.
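For reference, the five scheduling fields in that crontab line break down like this:

```
# minute  hour  day-of-month  month  day-of-week  command
  0       1     *             *      7            /usr/local/sbin/certbot-auto renew ...
# day-of-week 7 (or 0) means Sunday, so this runs every Sunday at 01:00
```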

The result will look something like this:

Processing /etc/letsencrypt/renewal/

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/ (skipped)
No renewals were attempted.

Forcing https:

Using https is nice, but if users aren't automatically redirected to it, they will still end up on the insecure site.

If you are using a fairly default Apache2 configuration, the site will be listed as a VirtualHost in /etc/apache2/sites-enabled/000-default.conf.

Inside that VirtualHost, you can comment out the following line:

DocumentRoot /var/www/html

After that line you can enter a new line that forces it to https:

Redirect permanent /
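Put together, a minimal port-80 VirtualHost along these lines could look like the following sketch, where example.com stands in for your actual domain:

```
<VirtualHost *:80>
    ServerName example.com
    # DocumentRoot /var/www/html
    Redirect permanent / https://example.com/
</VirtualHost>
```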

After restarting Apache, users are automatically forwarded to the new site.

Automatically start service that crashed

Written by Robert -

There is a problem with a service that gets closed at random intervals, and this service needs to keep running. The application is being closed by the OOM killer, which suggests the system is running out of memory, but I'm unable to find the root cause at this moment. To figure out why this is happening, I created a script. It runs every minute and checks whether there is a PID running for perl. If not, it starts the service and appends the output of free -m (free memory in megabytes), along with the date and time, to a log file.

#!/bin/bash

pidof /usr/bin/perl >/dev/null
# Is there an instance of /usr/bin/perl running? Send the output to /dev/null

if [[ $? -ne 0 ]] ; then
        # If it isn't running, do the following:
        /etc/init.d/servicename start > /dev/null
        # Start the servicename service
        echo "--------------" >> /var/log/servicename.txt
        echo "--------------" >> /var/log/servicename.txt
        echo "--------------" >> /var/log/servicename.txt
        # Three lines of - symbols to easily see the different blocks
        echo "Restart servicename Protocol: $(date)" >> /var/log/servicename.txt
        # Add a line with the current date and time
        free -m >> /var/log/servicename.txt
        # Output the current memory usage
        tail -n 20 /var/log/messages >> /var/log/servicename.txt
        # Output the last 20 lines of /var/log/messages to the servicename.txt log file.
        # You can increase this number to get more lines in the log file.
else
        echo "servicename was working fine: $(date)" >> /var/log/servicenameisfine.txt
fi

Keep in mind that this is not a solution to the problem, but a way to minimize impact while searching for the cause in the custom-written application that keeps crashing.
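The check-and-act pattern itself can be tried anywhere; in this reduced sketch the process name is deliberately made up, so the "restart" branch triggers:

```shell
name="no-such-process-demo"

# pidof exits non-zero when no matching process exists
if pidof "$name" >/dev/null 2>&1; then
    echo "$name was working fine: $(date)"
else
    echo "Restart $name Protocol: $(date)"
fi
```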
Here is an example of the output of the script when the service isn't running:

Restart servicename Protocol: Thu Jun  2 15:52:01 CEST 2016
             total       used       free     shared    buffers     cached
Mem:          8002       6569       1432          0        367       3729
-/+ buffers/cache:       2472       5529
Swap:        10031         13      10018
Jun  2 15:37:01 web /usr/sbin/cron[26211]: (root) CMD (/root/
Jun  2 15:38:01 web /usr/sbin/cron[26244]: (root) CMD (/root/
Jun  2 15:39:01 web /usr/sbin/cron[26287]: (root) CMD (/root/
Jun  2 15:40:01 web /usr/sbin/cron[26311]: (root) CMD (/root/
Jun  2 15:41:01 web /usr/sbin/cron[26384]: (root) CMD (/root/
Jun  2 15:42:01 web /usr/sbin/cron[26412]: (root) CMD (/root/
Jun  2 15:43:01 web /usr/sbin/cron[26435]: (root) CMD (/root/
Jun  2 15:44:01 web /usr/sbin/cron[26490]: (root) CMD (/root/
Jun  2 15:45:01 web /usr/sbin/cron[26575]: (root) CMD (/root/
Jun  2 15:46:01 web /usr/sbin/cron[26636]: (root) CMD (/root/
Jun  2 15:47:01 web /usr/sbin/cron[26691]: (root) CMD (/root/
Jun  2 15:48:01 web /usr/sbin/cron[26748]: (root) CMD (/root/
Jun  2 15:49:01 web /usr/sbin/cron[26800]: (root) CMD (/root/
Jun  2 15:50:01 web /usr/sbin/cron[26836]: (root) CMD (/root/
Jun  2 15:51:01 web /usr/sbin/cron[26928]: (root) CMD (/root/
Jun  2 15:52:01 web /usr/sbin/cron[26970]: (root) CMD (/root/

Here is an example of the output of when the service is properly running:

servicename was working fine: Thu Jun  2 15:51:01 CEST 2016
servicename was working fine: Thu Jun  2 15:56:01 CEST 2016
servicename was working fine: Thu Jun  2 16:01:01 CEST 2016

After creating the script, make sure it's executable by using the chmod +x command. After making it executable you can add it to the crontab by invoking the crontab -e command. In this case I wanted to run this every 5 minutes.

*/5 * * * * /root/

Depending on the distribution, the cron service might have to be restarted. Because the server this script runs on uses SuSe Enterprise Server, cron has to be restarted by invoking service cron restart.

SuSe server upgrade from SP3 to SP4

Written by Robert -

At work, to be able to get the latest security patches, I had to upgrade a few servers from SuSe 11 SP3 to SP4. This is not a very complicated procedure.

First, you want to update the current system so the latest patches are installed.

zypper ref -s
zypper update -t patch
zypper update -t patch

After that, we need to make sure the migration tools are available and the right repos are added:

zypper se -t product | grep -h -- "-migration" | cut -d'|' -f2

After this is done, we can install the migration tool:

zypper in -t product SUSE_SLES-SP4-migration

After installation of the tool, we need to make sure the server is registered. This requires a paid license.

suse_register -d 2 -L /root/.suse_register.log

Now we can refresh the repositories.

zypper ref -s

Just to be sure the SP4 repositories are enabled we enable them again:

zypper modifyrepo --enable SLES11-SP4-Pool
zypper modifyrepo --enable SLES11-SP4-Updates

Let's start the upgrade

zypper dup --from SLES11-SP4-Pool --from SLES11-SP4-Updates

After the upgrade, we need to make sure all the latest patches are installed.

zypper update -t patch

Reregister the now SP4 server at SuSe

suse_register -d 2 -L /root/.suse_register.log

Now you can reboot the server, check the log files for errors, make sure the system and the software running on it are fully working and you are done.

You can also do it in a script (and yes, I prefer to use pauses in there)

I don't recommend doing this with a script; this is just to show that it's possible. If you have a large number of servers requiring an update, all backups have been checked and are working properly, and you are aware of the risks, you can do it this way.

#!/bin/bash
# Update and registering

read -p "Let's start with updating zypper. Press enter"
zypper ref -s
zypper update -t patch
zypper update -t patch
zypper se -t product | grep -h -- "-migration" | cut -d'|' -f2
read -p "Did that return SP4? If so, let's continue. Press enter"

zypper in -t product SUSE_SLES-SP4-migration

read -p "Now it's going to register. Press enter"
suse_register -d 2 -L /root/.suse_register.log

# Upgrade the system to SP4
read -p "Let's refresh zypper and make sure the SP4 pool and update repo's are enabled. Press enter"
zypper ref -s
zypper modifyrepo --enable SLES11-SP4-Pool --enable SLES11-SP4-Updates

read -p "Ready for the upgrade? press enter"
zypper dup --from SLES11-SP4-Pool --from SLES11-SP4-Updates

# Upgrade with the latest patches

read -p "Updates done, let's check for new patches. Press enter"
zypper update -t patch

# Register it again
read -p "And let's register this thing again"
suse_register -d 2 -L /root/.suse_register.log

Changing a folder to a new drive with symbolic links

Written by Robert -

Changing a folder location.

I'm having the following issue: after upgrading a software package, a virtual server is now generating a lot of session and image caches that it stores in the /tmp directory. This is causing problems because /tmp is on the / partition, which is too small to handle this data properly. Because this is a virtual machine running in ESX, I'm able to attach a new disk without touching hardware. I could enlarge the drive the / partition is on and extend the filesystem, but because that carries a serious risk and I wish to avoid downtime, I'm going to add a new drive and use symbolic links to place the actual files on it.

This is the current situation. As you can see, there are two "physical" drives attached to this server. sda has the swap, / and the /boot directory, sdb has the /var/log/apache2 directory.

sda      8:0    0    50G  0
├─sda1   8:1    0   9.8G  0 [SWAP]
├─sda2   8:2    0    40G  0 /
└─sda3   8:3    0   205M  0 /boot
sdb      8:16   0    50G  0
└─sdb1   8:17   0    50G  0 /var/log/apache2
fd0      2:0    1     4K  0
sr0     11:0    1  1024M  0

With the df -h command you can easily see that the problem is on the / partition: it's at 82% usage at the time of writing this document.

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        40G   26G   12G  82% /
udev            4.0G  104K  4.0G   1% /dev
tmpfs           4.0G     0  4.0G   0% /dev/shm
/dev/sda3       199M   35M  154M  19% /boot
/dev/sdb1        50G  2.6G   45G   6% /var/log/apache2

The first step is to add a new drive within ESX. After contacting an external party regarding the software that is causing this, I've decided to give this server a drive with a space of 160GB.

[screenshot: adding the new disk in ESX]

To make this drive accessible, I rebooted the server. fdisk -l now shows the new disk:

Disk /dev/sdc: 171.8 GB, 171798691840 bytes
255 heads, 63 sectors/track, 20886 cylinders, total 335544320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006a3b5

 Device Boot      Start         End      Blocks   Id  System

Using the command parted /dev/sdc I first created a new partition table: mklabel msdos
After this was done I was able to create a new partition filling the entire drive:

mkpart primary ext3 1MiB 100%

After reviewing this you can see that the new disk has a partition on it:

Model: VMware Virtual disk (scsi)
Disk /dev/sdc: 172GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  172GB  172GB  primary  ext3         type=83

After creating a partition using parted, you need to format it:

mkfs.ext3 /dev/sdc1

Now that we have a nice partition on the new drive, we need to mount it. I created a new folder on the system called /mnt/sdc:

mkdir /mnt/sdc

Now I can add this drive to the /etc/fstab file so it will be mounted automatically when the server boots. I did this by opening /etc/fstab with my favorite text editor, vim, and inserting the following line into the file:

/dev/sdc1 /mnt/sdc ext3 defaults 0 0

Because this drive only holds cache files, I don't need it to be backed up by any software, so I left the default settings as above.
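As a side note, if device names might change later (for example when adding more disks), mounting by UUID is more robust. You can look up the UUID with blkid /dev/sdc1 and use a line like this instead (the UUID shown is a placeholder):

```
UUID=0a1b2c3d-0000-0000-0000-000000000000 /mnt/sdc ext3 defaults 0 0
```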

After saving the file, you can remount the drives according to the content of the fstab file using the following command:

mount -a

After typing the df -h command, you see a nice new line:

/dev/sdc1 158G 188M 150G 1% /mnt/sdc

The following three directories had to be moved from /tmp to the new location, /mnt/sdc. I do this by creating a symbolic link from each location in /tmp to the corresponding directory in /mnt/sdc.
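The symlink mechanism itself can be tried safely in a scratch directory first; all paths below are made up for illustration:

```shell
# a stand-in for the new drive and for /tmp
mkdir -p /tmp/lndemo/newdrive/SessionCache /tmp/lndemo/fakeroot

# link the old location to the directory on the "new drive"
ln -s /tmp/lndemo/newdrive/SessionCache /tmp/lndemo/fakeroot/SessionCache

# anything written through the link ends up on the new drive
echo "session data" > /tmp/lndemo/fakeroot/SessionCache/test.txt
cat /tmp/lndemo/newdrive/SessionCache/test.txt
```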

$:/mnt/sdc # service apache2 stop
Shutting down httpd2 (waiting for all children to terminate)

First I move the folders out of the way:

mv SessionCache/ SessionCache.old/
mv getmomimg/ getmomimg.old
mv imgredir/ imgredir.old

Now, after creating the matching directories in /mnt/sdc, let's create some symbolic links:

$:/tmp # ln -s /mnt/sdc/imgredir/ /tmp/imgredir
$:/tmp # ln -s /mnt/sdc/SessionCache/ /tmp/SessionCache
$:/tmp # ln -s /mnt/sdc/getmomimg/ /tmp/getmomimg

After using the ls -lah command, you can see that the newly created symbolic links have one problem: they are owned by the root user and the root group, instead of the wwwrun user in the www group.

lrwxrwxrwx  1 root root    22 Apr 28 21:28 SessionCache -> /mnt/sdc/SessionCache/
drwxrwxrwx  3 wwwrun www  4.0K Apr 28 17:54 SessionCache.old
lrwxrwxrwx  1 root root    19 Apr 28 21:28 getmomimg -> /mnt/sdc/getmomimg/
drwxrwxrwx 18 wwwrun www  4.0K Mar 10  2014 getmomimg.old
lrwxrwxrwx  1 root root    18 Apr 28 21:28 imgredir -> /mnt/sdc/imgredir/
drwxrwxrwx 18 wwwrun www  4.0K Apr 28 18:29 imgredir.old

We fix this by using the chown command.

chown -R wwwrun:www imgredir/
chown -R wwwrun:www getmomimg/
chown -R wwwrun:www SessionCache/

We also need to apply the same ownership to the actual directories under /mnt/sdc. After doing this, we can start apache2 again:

service apache2 start

Once apache2 is started, I can now check that the drive is being populated

hostname:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        40G  5.7G   32G  16% /
udev            4.0G  108K  4.0G   1% /dev
tmpfs           4.0G     0  4.0G   0% /dev/shm
/dev/sda3       199M   35M  154M  19% /boot
/dev/sdb1        50G  2.6G   45G   6% /var/log/apache2
/dev/sdc1       158G  4.6G  145G   4% /mnt/sdc

Now make sure you test the functionality of the software, in this case the webserver, and you're done.

Installing a Linux OS on a 2TB+ disk with a BIOS system

Written by Robert -

In most cases, such as Debian/Ubuntu and similar distributions, the installer of the operating system handles partition sizes perfectly. Some distros don't. If you install a Linux distribution that doesn't properly handle large drives, you might run into this limitation: the system won't recognize the large hard disk, which makes it impossible to boot. Luckily, there is a solution:

First of all, you need to make sure you create a GPT partition table. Then create a small (1MB) partition at the start of the disk and flag it bios_grub; this partition doesn't need a filesystem, as GRUB uses it to embed its boot code.

The rest of the disk should be a regular EXT4 partition; no boot flag should be set.

Install the Linux distribution of your choice.

Install Grub Legacy to the 1MB partition.

Mount the 1MB Grub partition.

Open /boot/grub/menu.lst using your favorite editor

Edit the root partition from hd0,0 to hd0,1 and edit pdev1=sda1 to pdev1=sda2

Save and reboot

Add a SMTP server to use for automatic mails

Written by Robert -

I like to receive the results of the scripts I write by e-mail. To do this, I added an SMTP server to the /etc/ssmtp/ssmtp.conf file:

root=your e-mail address
hostname=your e-mail address
AuthUser=your e-mail address
AuthPass=your password
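Note that ssmtp also needs to know which SMTP server to relay through (the mailhub setting), and hostname is normally the name of the machine itself rather than an e-mail address. A fuller sketch of /etc/ssmtp/ssmtp.conf, with placeholder values, could look like this:

```
root=you@example.com
mailhub=smtp.example.com:587
hostname=yourserver.example.com
AuthUser=you@example.com
AuthPass=your password
UseSTARTTLS=YES
```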

Add a PDF printer to CUPS

Written by Robert -

For testing purposes I needed a PDF printer installed on the print server. The server itself is running SUSE Linux Enterprise Server 11.3.

I was able to find the package on the OpenSUSE website.

First I downloaded the file to the /root folder:

wget -P /root

Then I installed the RPM file on the server.

rpm -Uhv /root/cups-pdf-2.6.1-1.1.x86_64.rpm 
# U for upgrade. This will install if the package isn’t there.
# h is for printing hash marks on the screen
# v is verbose, makes the hash marks look a bit nicer.

The combination of the h+v gives a nice output. In this case it’s a small file but it can be useful to see some information on your screen.

Preparing...                ########################################### [100%]

After this I opened the web interface so I could add the CUPS printer.

[screenshots: adding the PDF printer through the CUPS web interface]

Now the printer is added. By default it writes to the /var/spool/cups-pdf/ folder. In my case, since the test prints come from a Java application that prints as the tomcat user, they are saved in /var/spool/cups-pdf/tomcat.

Let's change the location the printer saves to:

In the /etc/cups directory you can see the following files:

total 112K
drwxrwxr-x   6 root lp   4.0K Jan 14 14:17 .
drwxr-xr-x 105 root root  12K Jan  3 07:26 ..
-rw-------   1 root lp     82 Jul 28 12:41 classes.conf
-rw-r--r--   1 root root    0 Apr  3  2014 client.conf
-rw-r--r--   1 root root 1.1K Feb 26  2009 command.types
-rw-r--r--   1 root root 9.4K Aug 27  2013 cups-pdf.conf
-rw-r-----   1 root lp   2.6K Apr  3  2014 cupsd.conf
-rw-r-----   1 root root 4.2K Feb 14  2013 cupsd.conf.default
drwxr-xr-x   2 lp   lp   4.0K Feb 14  2013 interfaces
-rw-r--r--   1 root root 4.7K Feb 14  2013 mime.convs
-rw-r--r--   1 root root 6.4K Feb 14  2013 mime.types
drwxr-xr-x   2 root lp   4.0K Jan 14 14:17 ppd
-rw-------   1 root lp   2.0K Jan 14 14:17 printers.conf
-rw-------   1 root lp   1.8K Jan 14 13:25 printers.conf.O
-rw-r--r--   1 lp   sys   946 Sep 12  2012 pstoraster.convs
-rw-r-----   1 root lp    186 Feb 14  2013 snmp.conf
drwx------   2 root lp   4.0K Feb 14  2013 ssl
drwxrwxr-x   3 root lp   4.0K Jul 28 14:34 yes

As you can see there is a file called cups-pdf.conf. The date is the day the pdf tool was compiled.

I would like to save the documents to the /tmp directory since I only use these to test and I don't wish to fill the system up.

If you open the config file with your favorite text editor, you can see the following part in the document:

### Key: Out
##  CUPS-PDF output directory
##  special qualifiers:
##     ${HOME} will be expanded to the user's home directory
##     ${USER} will be expanded to the user name
##  in case it is an NFS export make sure it is exported without
##  root_squash!
### Default: /var/spool/cups-pdf/${USER}
#Out /var/spool/cups-pdf/${USER}

There are a lot of nice things you can do with this. For example, you can have a specific PDF folder in each user's home directory and print the files automatically in there. In that case you would use Out ${HOME}/PDF or Out /home/${USER}/PDF.

In my case, I don't need the files in the home directory; I want them in the /tmp folder, which gets emptied automatically at reboot. The end result is:

Out /tmp/ 

Now we can print a test page to look at the result:

[screenshots: printing a test page from the CUPS web interface]

When you list the files in the tmp folder you can now see a test page:

sles:/tmp # ls -lah
total 87K
drwxrwxrwx  5 root   root    36K Jan 14 17:09 .
drwxr-xr-x 25 root   root   4.0K Jan 14 17:08 ..
-rw-------  1 root   root    47K Jan 14 17:09 Test_Page.pdf

How to back up a Linux system over SMB the easy way

Written by Robert -

At work we are changing backup environments and I needed a simple way to create full backups of a Linux server to a Windows share. You can create the fileshare on another Linux machine and use the rest of the guide as is.

The first thing I've done is created a share on the Windows server where there is enough space to place the backups, and secured it with a username and password. There are two ways to do this: the first is to create the backup locally and then copy it over; the second is to write it to the share directly. I've gone for the second option in this situation. You can also use this script with a share on a Linux server. My advice is to create one share per server, with a specific username and password for each server.

First create a folder where you want to mount the Windows server to.

You can change the name to anything you like. I used the name server for this example.

mkdir /mnt/server

Mount the Windows share to folder created above:

mount -t cifs //ServerName/LinuxBackup -o username=windowsusername,password=windowspassword /mnt/server

When I look at the df output I can see that the server is mounted:

Filesystem                Size  Used Avail Use% Mounted on
rootfs                     20G  3.8G   15G  20% /
/dev/sda2                  20G  3.8G   15G  20% /
/dev/sda3                  53G  592M   51G   2% /home
//ServerName/LinuxBackup  137G   27G  110G  20% /mnt/server

Now I know that the fileshare works from Linux, let's unmount it:

umount /mnt/server

Now it's time to add this line to the /etc/fstab file to make sure it automatically mounts it at boot:

//ServerName/LinuxBackup /mnt/server cifs username=windowsusername,password=windowspassword,iocharset=utf8,sec=ntlm 0 0 

When done, save the file and use the line below to mount the added drive to your Linux machine:

mount -a 

Creating the first backup:

I like to create the first backup manually to make sure everything works as it should. Make sure you run this command as root or use the sudo command (super user do). Otherwise, critical parts of the system will not be backed up.

First make sure it’s mounted:

mount -t cifs //ServerName/LinuxBackup -o username=windowsusername,password=windowspassword /mnt/server 1>/root/mount1.txt 2>/root/mount2.txt

Let’s create a folder with the date of today:

cd /mnt/server  
mkdir $(date '+%d-%b-%Y')

Then we start the first backup:

tar cvpjf /mnt/server/$(date '+%d-%b-%Y')/backup.tar.bz2 --exclude=/proc --exclude=/dev --exclude=/lost+found --exclude=/mnt --exclude=/sys --exclude=/root/ /  1>/mnt/server/$(date '+%d-%b-%Y')/backup.txt 2>/mnt/server/$(date '+%d-%b-%Y')/backuperror.txt

This creates a tar archive with the following options:

c = create
v = verbose, not required but might be helpful if you want to see what it backed up.
p = preserve permissions. This makes sure file ownership and modes are backed up correctly.
j = creates a bzip2 archive to compress the data as much as possible. This can save a lot of storage.
f = output to file

The following folders are excluded from the backup:

/proc = Virtual file system, this is populated at boot. No need to create a backup of this.
/dev = Device files. This folder contains links to your hardware. No need to create a backup of this.
/lost+found = Trash, you can make a backup of this if you really want to. I've excluded this.
/mnt = mounted filesystems. In my situation there is no need for this. If you have mounted filesystems that you wish to backup, remove this but make sure you do exclude the Windows share you added earlier.
/sys = Virtual file system, this is populated at boot. No need to create a backup of this.
/root = we will place the backup results here. If you want you can put these in another folder and exclude that specific folder.

Output stream 1 (stdout) shows the files that are backed up; you can store this at the location of the backup, keep it locally, or discard it by sending it to /dev/null. Output stream 2 (stderr) shows the errors.
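To see the create/exclude behaviour on a small scale, here is a miniature round trip in a scratch directory (all paths are invented for the demo):

```shell
# build a tiny tree with one directory we want and one we exclude
mkdir -p /tmp/tar-demo/src/keep /tmp/tar-demo/src/skip /tmp/tar-demo/restore
echo "important" > /tmp/tar-demo/src/keep/file.txt
echo "cache"     > /tmp/tar-demo/src/skip/file.txt

cd /tmp/tar-demo
tar cpjf backup.tar.bz2 --exclude=src/skip src   # create, preserve perms, bzip2
tar xpjf backup.tar.bz2 -C restore               # extract into ./restore

ls restore/src   # only "keep" should be there; "skip" was excluded
```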

I added this part to the file names:

$(date '+%d-%b-%Y')

This puts the current date into the folder and file names. If you run it with echo, you can see that it outputs the date:

 # echo $(date '+%d-%b-%Y')  

This makes sure the backup writes to a new folder, named after the current date, each time it runs. This prevents the backup from overwriting the previous one every day.
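A quick way to convince yourself of this is to create a dated folder in a scratch location:

```shell
backupdir="/tmp/date-demo/$(date '+%d-%b-%Y')"   # e.g. 02-Jun-2016
mkdir -p "$backupdir"
ls /tmp/date-demo   # one folder, named after today's date
```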

After this we can unmount the share:

umount /mnt/server

Automatically create the backup.

Here is the fun part. I want to have a nightly backup of my system.

Everything I’ve written above resulted in the following shell script:

#!/bin/bash
umount /mnt/server
mount -t cifs //ServerName/LinuxBackup -o username=windowsusername,password=windowspassword /mnt/server 1>/root/backupresult/mount1.txt 2>/root/backupresult/mount2.txt
cd /mnt/server
mkdir $(date '+%d-%b-%Y')
tar cvpjf /mnt/server/$(date '+%d-%b-%Y')/backup.tar.bz2 --exclude=/proc --exclude=/dev --exclude=/lost+found --exclude=/mnt --exclude=/sys --exclude=/root/backupresult/ /  1>/mnt/server/$(date '+%d-%b-%Y')/backup.txt 2>/mnt/server/$(date '+%d-%b-%Y')/backuperror.txt
umount /mnt/server

I saved this as a file called backup.tar in the /root directory. After that I made it executable by typing the following command:

chmod +x /root/backup.tar

Now you can open the crontab with the edit option:

sudo crontab -e

Because I want to start the backup at midnight I used the following command:

@midnight /root/backup.tar 


If you want to restore the backup, mount the file share, browse to it and execute the following command:

tar xvpfj backup.tar.bz2 -C /

Then update grub according to your distro and you're done :)

Finding system information in Linux

Written by Robert -

There are several ways to find information on your system.

Let's start with a common tool to find out what kernel you are running, along with some other information. This tool is called uname, a common tool within Linux to view basic system data. The most common option is -a, which shows "all" information within uname.

Here is the output of one of my systems:

$ uname -a
Linux <hostname> 3.0.76-0.11-default #1 SMP Fri Jun 14 08:21:43 UTC 2013 (ccab990) x86_64 x86_64 x86_64 GNU/Linux

As you can see, the device is a Linux server. It also shows that this server is running a 3.0.76 kernel built in 2013 and that it's a 64-bit GNU/Linux system. This is nice to know, but it doesn't show everything you might wish to know.

If you manage a lot of different servers it can also be nice to know what version of Linux it's running. Different Linux systems have a different name for the file which contains this information, but they all end in release. Because of this, you can use the cat (concatenate) command to read the output of the file.

$ cat /etc/*release
SUSE Linux Enterprise Server 11 (x86_64)

Another tool is lspci, which shows all the PCI devices attached to your computer. Here is a small sample of the output:

00:05.4 PIC: Intel Corporation Ivytown IOAPIC (rev 04)
00:11.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Virtual Root Port (rev 05)
00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 (rev 05)
00:1c.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Root Port 1 (rev b5)
00:1c.7 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Root Port 8 (rev b5)
00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 (rev 05)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
00:1f.0 ISA bridge: Intel Corporation C600/X79 series chipset LPC Controller (rev 05)
00:1f.2 IDE interface: Intel Corporation C600/X79 series chipset 4-Port SATA IDE Controller (rev 05)
01:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 05)
01:00.1 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200EH
01:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 05)
01:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 02)
02:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

To find USB devices you can use the corresponding lsusb:

$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 003: ID 0424:2660 Standard Microsystems Corp.

If you wish to know more about the CPU in the device, there is, not surprisingly, lscpu:

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                20
On-line CPU(s) list:   0-19
Thread(s) per core:    1
Core(s) per socket:    10
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Stepping:              4
CPU MHz:               3000.000
BogoMIPS:              5985.44
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0-9
NUMA node1 CPU(s):     10-19

You can see this is a 64-bit CPU and that this machine has two physical CPUs with 10 cores per socket. You can also see the clock speed the processors run at and the cache sizes.

If you want a full hardware report you can run hwinfo, but this tool generates a LOT of data. When I have to run it, I usually redirect the output to a file and then search it with grep.

To find SCSI devices you can use lsscsi:

$ lsscsi
[0:0:0:0]    disk    HP       LOGICAL VOLUME   4.68  /dev/sda
[0:3:0:0]    storage HP       P420i            4.68  -
[2:0:0:0]    cd/dvd  hp       DVDRAM GT80N     EA02  /dev/sr0

But I prefer to find drives with lsblk or fdisk.

$ lsblk
NAME                              MAJ:MIN RM   SIZE RO MOUNTPOINT
sda                                 8:0    0   7.7T  0
├─sda1                              8:1    0   128G  0 [SWAP]
├─sda2                              8:2    0   200G  0 /
├─sda3                              8:3    0   203M  0 /boot
└─sda4                              8:4    0   7.3T  0
  ├─vgroot-data (dm-0)            253:0    0     1T  0 /data
  ├─vgroot-dumps (dm-1)           253:1    0   500G  0 /dumps
  └─vgroot-lvm (dm-2) 253:2    0     1T  0 /lvm
sr0                                11:0    1  1024M  0

Here is an fdisk output (-l lists the current disks):

$ fdisk -l
Disk /dev/sda: 8401.5 GB, 8401467301888 bytes
255 heads, 63 sectors/track, 1021420 cylinders, total 16409115824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 1835008 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            3584   268445183   134220800   82  Linux swap / Solaris
/dev/sda2       268445184   687887871   209721344   83  Linux
/dev/sda3   *   687887872   688303615      207872   83  Linux
/dev/sda4               1           1           0+  ee  GPT

Disk /dev/mapper/vgroot-data: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 1835008 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vgroot-dumps: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders, total 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 1835008 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vgroot-lvm: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 1835008 bytes
Disk identifier: 0x00000000

For drive information it can be helpful to use df (disk free) with the human-readable option (-h) to check whether a drive is filling up:

 df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/sda2                           197G  9.2G  178G   5% /
udev                                 64G  184K   64G   1% /dev
tmpfs                                64G     0   64G   0% /dev/shm
/dev/sda3                           197M   39M  149M  21% /boot
/dev/mapper/vgroot-data             1.0T   96G  928G  10% /data
/dev/mapper/vgroot-dumps            500G   27G  474G   6% /dumps
/dev/mapper/vgroot-lvm  1.0T  240G  784G  24% /lvm
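Because df output is easy to parse, it also lends itself to simple monitoring. A hedged sketch that warns when the root filesystem passes a threshold (the 90% limit is arbitrary):

```shell
# take the Use% column of the second line of `df -P /` and strip the % sign
usage=$(df -P / | awk 'NR==2 {gsub("%", "", $5); print $5}')

if [ "$usage" -gt 90 ]; then
    echo "warning: / is ${usage}% full"
else
    echo "/ is at ${usage}%, all fine"
fi
```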

To view the RAM status you can either use the command "free -m" or check top for active data:

free -m:

$ free -m
             total       used       free     shared    buffers     cached
Mem:        129161     128765        395          0        145      83058
-/+ buffers/cache:      45562      83598
Swap:       131074        227     130847

$ top
top - 17:27:38 up 34 days,  4:00,  1 user,  load average: 0.16, 0.26, 0.30
Tasks: 245 total,   1 running, 244 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.1%us,  0.2%sy,  0.0%ni, 98.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    129161M total,   128763M used,      397M free,      145M buffers
Swap:   131074M total,      227M used,   130847M free,    83058M cached

There are a lot of readable files within the /proc folder that can show you system information. You can for example use 'cat /proc/cpuinfo' for more information about your CPU.

If you still haven't found the information you are looking for, there is always dmidecode.
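As a sketch, a few one-liners that pull system details from /proc; the dmidecode calls (commented out, since they require root and an installed dmidecode package) show the hardware vendor information mentioned above:

```shell
# Kernel version and build, straight from /proc
cat /proc/version

# Total memory in kB
grep MemTotal /proc/meminfo

# CPU details (field names differ per architecture)
head /proc/cpuinfo

# Hardware vendor and model via dmidecode (requires root)
# sudo dmidecode -s system-manufacturer
# sudo dmidecode -s system-product-name
```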

How to back up a physical Linux machine with Veeam

Written by Robert -

Veeam is known as an excellent enterprise backup system that can back up entire datacenters. It handles tapes, offsite backups to the cloud and many more features that are perfect for almost any company. It doesn't use agents like some of its competitors, so there is one less thing that can go wrong. There is a drawback though: you aren't able to create and restore backups with Veeam directly if you are using physical machines. Even so, there is no reason not to use Veeam if you still have one or two physical machines left. It's actually pretty easy to use Veeam with physical Linux machines. It won't be exactly the same as using it directly, but it does work.

There are two ways to do this. One is writing the backup to a tar file on the Veeam Backup & Replication server and using file backup within the backup software. This works, unless you want to use a Veeam cloud provider for offsite backups. If you want that, I suggest creating a virtual machine, either in Hyper-V or ESX, and saving the tar to that server. Then you can use Veeam to put that virtual machine offsite in a cloud, on tape or anywhere you want it. When you want to recover a server from a backup, you can install the required OS and unpack the tar file back onto the / filesystem. Personally, even if you don't wish to use Cloud Connect right now, it might be wise to do this right away. Requirements can change, and changing the backup procedure can be quite some work. If you start using Veeam you might as well start using it in a way that lets you use this feature in the near future.
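A minimal sketch of the tar approach, assuming the Veeam-visible share (or the helper VM's disk) is mounted at /mnt/backup; the path and the exclude list are illustrative, adjust them for your environment:

```shell
# Archive the root filesystem, skipping pseudo-filesystems and the
# backup target itself; -p preserves permissions for a clean restore.
# /mnt/backup is an assumed mount point for the Veeam-visible storage.
tar --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
    --exclude=/tmp --exclude=/mnt \
    -czpf /mnt/backup/$(hostname)-root.tar.gz /
```

Restoring works the other way around: boot the freshly installed OS and unpack the archive onto / with tar -xzpf.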

alt text

Using GUI applications over SSH

Written by Robert -

There are usually multiple ways to achieve the same goal. In most cases I prefer to use a command line version instead of a GUI: it's faster and usually just as easy to use. There are cases, however, such as installing a new virtual machine, where you want a graphical way to view the software. To do this, X11 must be installed on both the client and the server. If this doesn't work, make sure "X11Forwarding yes" is set in the /etc/ssh/sshd_config file.


ssh -X servername

Now you should be able to start graphical applications over an SSH server. This can be a bit laggy if it's over the internet, but it works.
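Once connected with ssh -X, a quick way to check that forwarding is active (xeyes is just an example client and may not be installed; any X application will do):

```shell
# DISPLAY is set by sshd when X11 forwarding is active,
# typically to something like localhost:10.0
echo $DISPLAY

# Start a test client; it should appear on your local screen
# xeyes &
```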


Windows takes just a little bit longer to get working.

First of all, you are going to need the latest version of Xming. Install the application and enable X11 forwarding.

Now open Putty and go to Connection -> SSH -> X11. There you can select 'Enable X11 forwarding'.

alt text

Log in to the Linux box and you can run graphical software by using X11 forwarding.

alt text

Keep in mind that X11 forwarding only works under the username you used to log in with SSH. If you change to another user during your session (for example with su or sudo -i) you will lose the ability to run X11 software. It does still work when prefixing a single command with sudo.

Installing a new kernel in a Ubuntu based distribution

Written by Robert -

I was having some video card driver issues which seemed to be fixed in the latest Linux kernel, 4.1.0. Because of that, I wanted to upgrade my kernel to the latest version.

First, go to the tmp folder:
cd /tmp

Create an empty folder for the new kernel, in this case one for 4.1.0, and enter it:
mkdir kernel410
cd kernel410/

Download the required kernel version. In this case I downloaded the latest stable kernel available.

Install the three files required for the new kernel:
sudo dpkg -i *.deb

Update grub if everything was successful:
sudo update-grub

After a reboot, you can now use a newer kernel version.

Nightly mail with disk and ram usage

Written by Robert -

I'm maintaining Linux servers and I want to know what the disk and RAM usage is. I want to get this in my mailbox every night to prevent system issues with disk usage. To do this, I created a simple shell script. Feel free to copy this and use it yourself.

echo This is the disk usage on $HOSTNAME > /tmp/mail_report.log
# This line adds a header mentioning the hostname and writes it to a new
# file called mail_report.log in the /tmp folder.
# The $HOSTNAME variable holds the hostname of the server.
# Keep in mind that any old file (in this case from the day before)
# is overwritten with a new file. This is due to the single > symbol.
# The line is added for readability.

echo >> /tmp/mail_report.log
# This adds an empty line to the mail_report.log file.

df -h >> /tmp/mail_report.log
# This adds the output of df -h to the mail_report.log file.

echo >> /tmp/mail_report.log
echo >> /tmp/mail_report.log
echo >> /tmp/mail_report.log
# This adds 3 empty lines to the mail_report.log file.

echo ram usage on $HOSTNAME >> /tmp/mail_report.log
# This adds a header for the next command to the mail_report.log, for readability.

echo >> /tmp/mail_report.log
free -m >> /tmp/mail_report.log
# This adds the memory usage to the mail_report.log file.

mail -s "disk and ram report $HOSTNAME" < /tmp/mail_report.log
# -s specifies the subject
# after that comes the mail address
# the < is for inputting the /tmp/mail_report.log file

I created a file with the content above and named it

After that, I made the file executable by using the chmod +x command.

server:~ # chmod +x

Test the functionality of the created file.

server:~ # ./

Add it to the crontab.

server:~ # crontab -e

You can either:

Use it the "old" way with the minute / hour / day of month / month / day of week fields.
In this case I want to run it at 2 AM daily. As a result I have to specify minute 0, hour 2 and a wildcard for the remaining fields, followed by the command. This looks like:
0 2 * * * /root/
On a recent system you can easily run it at midnight by using:
@midnight /root/
See the man page for crontab if you wish to know more options.

Linux runlevels


Written by Robert -

Most current Linux distributions don't use runlevels anymore. Runlevels are used by the sysvinit system, which is being replaced in most distributions by systemd. Because a lot of servers and people that use Linux Mint still work with sysvinit, I've decided to write a small blog post about it. If you are looking for the runlevel equivalent within systemd, you would have to read the manual. I will write an article about that in the future.

A runlevel is simply a software configuration. You can assign a runlevel to a server and even create your own. There are several commonly used runlevels available within Linux systems, and a runlevel can have different results in different distributions. Here are some common runlevels:

0 | Halt (standardized)
1 | Single user mode 
2 | Debian default
3 | RedHat/Suse text mode
4 | Not commonly used except in Slackware 
5 | RedHat/Suse Graphical mode, Unused in Slackware
6 | Reboot (standardized)
S | Single user mode (standardized)

Now there are several reasons why you would want to use a different runlevel. One valid reason is to start a server in a text based mode. Most servers don't use a graphical interface for management. If you don't have the need for a graphical interface you can use the resources for more practical stuff, such as the task the server is assigned to do.

Viewing the current runlevel differs between systems. For example, on a Debian based distribution I would use the 'runlevel' command, but on an Arch or RedHat based distribution you can find the default runlevel in the /etc/inittab file.
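As a sketch, this is how the default runlevel can be extracted from an inittab-style file. The sample file is generated on the spot so the commands also run on systems without /etc/inittab; on a real sysvinit system, read /etc/inittab itself:

```shell
# Sample inittab fragment (hypothetical; substitute /etc/inittab)
cat > /tmp/inittab.sample <<'EOF'
# The default runlevel.
id:3:initdefault:
EOF

# The second colon-separated field of the initdefault line
# is the default runlevel; this prints 3
awk -F: '/initdefault/ {print $2}' /tmp/inittab.sample
```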

SSH Tunnel for port forwarding

Written by Robert -

Within Linux there are several services available that use a web interface for management. In most of these cases, best practice is to not share those interfaces with the end users because they might abuse the system. Luckily, you can easily block those ports on the firewall of the server and use an SSH tunnel to provide port forwarding to your local machine.

Here are two examples using the CUPS system as an example. CUPS is an acronym for Common Unix Printing System and is developed by Apple. The default port for CUPS is port 631.

From Windows

If you use a Windows device to manage the CUPS on port 631, you can use Putty with SSH port forwarding and your own browser.

In Putty, go to Connection -> SSH -> Tunnels. You need to assign a local port to the remote port of the server. In this case, the local port to use is 20631 and the port on the server is port 631. It will resemble the image below.

Alt text

Now you can open http://localhost:20631 in your browser and manage the CUPS system.

Alt text

From Linux

It's actually a lot easier to do this from within Linux, if you can remember the command for it. The fastest way to do this is the following:

ssh account@server -L 20631:localhost:631

The account part is your user account on the server, and the server part is either the IP address or the hostname of the server. -L is the part that does the port forwarding. It binds two ports together: first the port on your local machine, then the host to forward to as seen from the server (localhost), then the port on the remote machine.
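A variation that can be handy for long-running tunnels, using standard OpenSSH flags (account@server is of course a placeholder): -N skips the remote shell and -f sends ssh to the background once the tunnel is up:

```shell
# Forward local port 20631 to port 631 on the server without opening
# a shell; kill the backgrounded ssh process to close the tunnel.
ssh -f -N -L 20631:localhost:631 account@server
```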

Alt text

After typing in your password you can start your browser.

Alt text