Author: Tan Nguyen

How to set up multiple Tor instances with Polipo in Windows


Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

By default, Tor opens a SOCKS proxy on port 9050 that allows internet traffic to go through it to access the Tor network.

In this post, we will show how to set up multiple Tor instances on the same Windows machine.

If you want to set up multiple Tor instances with Polipo on Linux, read this post instead.

Install Tor

Installing Tor on Windows is easy.

Go to Tor official download page to download Tor.

Normally, if you download Tor Browser package, you will have a Tor Browser that is already configured to use Tor as a proxy to access the internet.

In our case, we will set up multiple instances of Tor as multiple proxies, which means we need Tor only, so you can download the Expert Bundle.

After you download it, unzip it to wherever you like. The extracted folder should contain:

  • <parent folder>/Tor: contains the executable files (exe and dlls)
  • <parent folder>/Data/Tor: contains the data that Tor uses to look up the geographic location of IPs.

If you run tor.exe in the /Tor folder, you can already start a Tor instance now.

Install Polipo

The Tor proxy supports the SOCKS protocol but does not support the HTTP proxy protocol.

To let Tor function as an HTTP proxy, we can use Polipo to make a tunnel: Polipo opens an HTTP proxy and transfers the packets between its HTTP proxy and Tor’s SOCKS proxy. That way, an application can leverage Tor’s power even if that application can only communicate through an HTTP proxy.

You can download Polipo here.

After downloading Polipo, just extract the zip file. We only need that polipo.exe file.

Set up the folder structure for multiple instances

In order to run multiple instances, we must set up a different port and data folder for each instance using command line arguments.

My setup goes like this:

  • MultiTor\bin\Tor: Tor executable folder (contains tor.exe and dlls)
  • MultiTor\bin\Data: Tor geoip folder (extracted from downloaded Tor package)
  • MultiTor\bin\Polipo: Polipo executable folder (contains polipo.exe)
  • MultiTor\data\tor-10001: data folder of Tor instance listening on port 10001
  • MultiTor\data\tor-10002: data folder of Tor instance listening on port 10002

In the above setup, I will use the same executable files for all instances, but a different data folder for each instance.

You can have your own setup, as long as the data folders are different among instances.

Start the first instance

Now, to start the first instance, I switch to the MultiTor folder and run the following command:

bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:10001 CONTROLPort 127.0.0.1:20001 DATADirectory data\tor-10001

The command above will start a Tor instance that has

  • 10001 as SOCKS proxy port
  • 20001 as CONTROL port
  • data\tor-10001 as data folder

We will also start a Polipo instance that tunnels through that Tor instance

bin\Polipo\polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1"

The command above will start a Polipo instance that has

  • 30001 as HTTP proxy port
  • talks to Tor proxy on port 10001
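
To quickly verify that this first pair works, one option (a rough check, assuming curl is available on your machine; the Tor Project's check service is used here purely as an example) is to request your apparent IP address once through the SOCKS proxy and once through the HTTP proxy:

curl --socks5-hostname 127.0.0.1:10001 https://check.torproject.org/api/ip
curl -x http://127.0.0.1:30001 https://check.torproject.org/api/ip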

Start the next instances

To start the second instance, first we have to edit the command a little bit so that it starts the first instance in a new window, releasing the current command prompt.

To do that, add the start command at the beginning of each command:

start bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:10001 CONTROLPort 127.0.0.1:20001 DATADirectory data\tor-10001
start bin\Polipo\polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1"

Now we can start the second instance following the same pattern

start bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:10002 CONTROLPort 127.0.0.1:20002 DATADirectory data\tor-10002
start bin\Polipo\polipo socksParentProxy="127.0.0.1:10002" proxyPort=30002 proxyAddress="127.0.0.1"

Note that the port numbers and the data folder have been changed for the second instance.

We can start as many instances as we want in this way.

Automate the task

To start a lot of instances, we can make .bat files to automate the task as follows:

start_all_tor.bat

CD C:\Tools\MultiTor\
SETLOCAL
SETLOCAL ENABLEDELAYEDEXPANSION
FOR /L %%G IN (10001,1,10100) DO (
 SET /a sp=%%G+0
 SET /a cp=%%G+10000
 echo !sp!
 echo !cp!
 mkdir data\tor-!sp!
 start bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:!sp! CONTROLPort 127.0.0.1:!cp! DATADirectory data\tor-!sp!
)
ENDLOCAL

start_all_polipo.bat

CD C:\Tools\MultiTor\
SETLOCAL
SETLOCAL ENABLEDELAYEDEXPANSION
FOR /L %%G IN (10001,1,10100) DO (
 SET /a sp=%%G+0
 SET /a pp=%%G+20000
 echo !sp!
 echo !pp!
 start bin\Polipo\polipo socksParentProxy="127.0.0.1:!sp!" proxyPort=!pp! proxyAddress="127.0.0.1"
)
ENDLOCAL

The first batch script will start 100 Tor proxy instances that listen on ports 10001-10100 and have control ports from 20001-20100, with data folders from data\tor-10001 to data\tor-10100.

The second batch script will start 100 Polipo proxy instances that listen on ports 30001-30100 and talk to the Tor proxy instances on ports 10001-10100 correspondingly.

To stop all the instances, you can run a script like this

stop_all_tor.bat

taskkill /IM tor.exe /F

stop_all_polipo.bat

taskkill /IM polipo.exe /F

How to set up multiple Tor instances with Polipo on Linux

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

By default, Tor opens a SOCKS proxy on port 9050 that allows internet traffic to go through it to access the Tor network.

In this post, we will show how to set up multiple Tor instances on the same Linux machine.

If you want to set up multiple Tor instances with Polipo on Windows, read this post instead.

Install Tor

On Ubuntu

$ apt update
$ apt install tor

After that, Tor should be accessible via /usr/bin/tor

Install Polipo

The Tor proxy supports the SOCKS protocol but does not support the HTTP proxy protocol.

To let Tor function as an HTTP proxy, we can use Polipo to make a tunnel: Polipo opens an HTTP proxy and transfers the packets between its HTTP proxy and Tor’s SOCKS proxy. That way, an application can leverage Tor’s power even if that application can only communicate through an HTTP proxy.

On Ubuntu

$ apt update
$ apt install polipo

Start the first instance

After Tor is installed, we can start Tor right away by running /usr/bin/tor.

However, to run multiple instances of Tor, we still have some extra tasks to do.

First of all, when a Tor instance runs, it will need the following things set up

  • a SOCKSPort, which its SOCKS proxy listens on
  • a CONTROLPort, which it listens on for commands from the user
  • a DATADirectory, where it saves its data.

Different instances must have these things set up differently.

Fortunately, Tor allows us to set up these things using command line arguments like this

$ /usr/bin/tor SOCKSPort 10001 CONTROLPort 20001 DATADirectory /tmp/tor10001/data

You can also specify which IP address your Tor instance should listen on

$ /usr/bin/tor SOCKSPort 127.0.0.1:10001 CONTROLPort 127.0.0.1:20001 DATADirectory /tmp/tor10001/data

Let’s start the first instance now

$ mkdir -p /tmp/tor10001/data
$ /usr/bin/tor SOCKSPort 10001 CONTROLPort 20001 DATADirectory /tmp/tor10001/data

We will also start a Polipo instance that tunnels through that Tor instance

$ polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1"

The command above will start a Polipo instance that has

  • 30001 as HTTP proxy port
  • talks to Tor proxy on port 10001

Start the next instances

In order to start the next instance, first you have to relaunch the first instance in the background with the nohup command

$ mkdir -p /tmp/tor10001/data
$ nohup /usr/bin/tor SOCKSPort 10001 CONTROLPort 20001 DATADirectory /tmp/tor10001/data &
$ nohup polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1" &

nohup also keeps your Tor instance running after you log out of your ssh session.

You can now start the next instances following the same pattern; just change the arguments appropriately

$ mkdir -p /tmp/tor10002/data
$ nohup /usr/bin/tor SOCKSPort 10002 CONTROLPort 20002 DATADirectory /tmp/tor10002/data &
$ nohup polipo socksParentProxy="127.0.0.1:10002" proxyPort=30002 proxyAddress="127.0.0.1" &

Notice that the parameters have changed to 10002, 20002, and 30002 correspondingly.

Automate the task

If you want to start a lot of instances, you can make a script to start all the instances automatically like this

start_all_tor.sh

#!/bin/bash

a=10001
b=20001
n=10100

echo "Start multiple Tors"
echo "Begin port " $a
echo "End port " $n

while [ $a -le $n ]
do 
    echo "Start Tor on port" $a
    mkdir -p /tmp/tor$a/data
    nohup /usr/bin/tor SOCKSPort $a CONTROLPort $b DATADirectory /tmp/tor$a/data &
    a=$(($a + 1)) 
    b=$(($b + 1)) 
done

The above script will start 100 Tor instances with SOCKSPort from 10001 to 10100, CONTROLPort from 20001 to 20100 with corresponding data folders.

start_all_polipo.sh

#!/bin/bash
a=10001
b=30001
n=10100
echo "Start multiple Polipo "
echo "Begin port " $a
echo "End port " $n

while [ $a -le $n ]
do
    echo "Start Polipo " $a
    nohup polipo socksParentProxy="127.0.0.1:$a" proxyPort=$b proxyAddress="127.0.0.1" &
    a=$(($a + 1))
    b=$(($b + 1))
done
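
After both scripts have run, a rough sanity check (a sketch; adjust the patterns if your paths differ) is to count how many Tor and Polipo processes are actually alive:

$ ps aux | grep -c "[/]usr/bin/tor"
$ ps aux | grep -c "[p]olipo"

The bracket in each pattern stops grep from counting itself.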

To stop all the instances, you can run a script like this

stop_all_tor.sh

#!/bin/bash
ps aux | grep tor | awk '{print $2}' | xargs kill -9

stop_all_polipo.sh

#!/bin/bash
ps aux | grep polipo | awk '{print $2}' | xargs kill -9

How to monitor any command output in real-time with watch command on Linux

On Linux, the watch command re-runs a command at a regular interval and refreshes its output on the screen, which works just like real-time monitoring. Let’s look at some practical examples.

Monitor log output

One of the most common scenarios in Linux is looking at the log output of an application using the tail command

$ tail -n 10 output.log

This command will output the last 10 lines of the output.log file to the terminal console.

Let’s say you want to check whether anything new has been appended to the log file; normally you would have to re-run the command every time. To do this automatically, just add a watch command at the beginning of the previous command like this

$ watch tail -n 10 output.log

By default, the watch command refreshes every 2 seconds. If you want to change the interval, add a -n argument like this

$ watch -n 1 tail -n 10 output.log

This will make the command refresh every 1 second, showing the last 10 lines of the log file.

Monitor disk I/O status

You can monitor disk I/O with iostat command.

First, you have to install iostat, which is included in sysstat package

On CentOS

$ yum install sysstat

On Ubuntu

$ apt install sysstat

Then run the command

$ iostat

Monitor I/O status in real-time

$ watch iostat

Monitor network connection

Let’s say you are running a web server on port 80 and want to check how many clients are connecting to the server.

First, we list all connections to the server using the netstat command

$ netstat -naltp

Then we filter those connections to port 80 that are established

$ netstat -naltp | grep :80 | grep ESTABLISHED

If you want to count the connections, add a -c argument

$ netstat -naltp | grep :80 | grep ESTABLISHED -c

To refresh the command in real-time, add watch at the beginning

$ watch "netstat -naltp | grep :80 | grep ESTABLISHED -c"

Notice that in the command above, the command to be watched is wrapped in quotes. Without the quotes, the shell would apply the pipes to the output of watch itself instead of passing the whole pipeline to watch, so grep would consume the wrong output stream. Putting the command we want to “watch” in quotes clears up the confusion and makes watch run the entire pipeline.

Monitor disk usage and disk free space

To monitor disk free space

$ watch df -h

To monitor disk usage

$ watch du -c -h -d 1

Monitor how many instances of a process are running

To monitor how many instances of a process are running, use a ps command with watch

$ watch "ps aux | grep nginx -c"

Top helpful Linux commands for beginners

Useful Command Line Keyboard Shortcuts

The following keyboard shortcuts are incredibly useful and will save you loads of time:

  • CTRL + U – Cuts text up until the cursor.
  • CTRL + K – Cuts text from the cursor until the end of the line
  • CTRL + Y – Pastes text
  • CTRL + E – Move cursor to end of line
  • CTRL + A – Move cursor to the beginning of the line
  • ALT + F – Jump forward to next space
  • ALT + B – Skip back to previous space
  • ALT + Backspace – Delete previous word
  • CTRL + W – Cut word behind cursor
  • CTRL + Insert – Copy selected text
  • Shift + Insert – Pastes text into terminal

sudo !!

If you don’t know it yet, sudo (super user do) is the prefix to run a command with an elevated privilege (like Run As Administrator in Windows).

sudo !! is a useful trick that saves you from retyping the previous command when it fails with “permission denied”.

For example, imagine you have entered the following command:

$ apt-get install ranger

The words “Permission denied” will appear unless you are logged in with elevated privileges.

Now, instead of typing this

$ sudo apt-get install ranger

Just type…

$ sudo !!

… and the previous denied command will be executed with sudo privileges.

Pausing Commands And Running Commands In The Background

I have already written a guide showing how to run terminal commands in the background.

So what is this tip about?

Imagine you have opened a file in nano as follows:

$ sudo nano abc.txt

Halfway through typing text into the file you realise that you quickly want to type another command into the terminal but you can’t because you opened nano in foreground mode.

You may think your only option is to save the file, exit nano, run the command and then re-open nano.

All you have to do is press CTRL + Z and the foreground application will pause and you will be returned to the command line. You can then run any command you like and when you have finished return to your previously paused session by entering “fg” into the terminal window and pressing return.

An interesting thing to try out is to open a file in nano, enter some text and pause the session. Now open another file in nano, enter some text and pause the session. If you now enter “fg” you return to the second file you opened in nano. If you exit nano and enter “fg” again you return to the first file you opened within nano.
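
Here is the whole flow in one place (a quick sketch using the same abc.txt example; the jobs output shown in the comment is approximate):

$ nano abc.txt    # press CTRL + Z to pause nano
$ ls -l           # run whatever command you need
$ jobs            # shows something like "[1]+  Stopped  nano abc.txt"
$ fg              # brings nano back to the foreground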

Use nohup To Run Commands After You Log Out Of An SSH Session

The nohup command is really useful if you use the ssh command to log onto other machines.

So what does nohup do?

Imagine you are logged on to another computer remotely using ssh and you want to run a command that takes a long time, then exit the ssh session but leave the command running even though you are no longer connected. nohup lets you do just that.

For example, if I started downloading a large file on a remote host using ssh without using the nohup command then I would have to wait for the download to finish before logging off the ssh session and before shutting down the laptop.

To use nohup all I have to type is nohup followed by the command as follows:

$ nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &

Running A Linux Command ‘AT’ A Specific Time

The ‘nohup’ command is good if you are connected to an SSH server and you want the command to remain running after logging out of the SSH session.

Imagine you want to run that same command at a specific point in time.

The ‘at’ command allows you to do just that. ‘at’ can be used as follows.

$ at 10:38 PM Fri
at> cowsay 'hello'
at> CTRL + D

The above command will run the program cowsay at 10:38 PM on Friday evening.

The syntax is ‘at’ followed by the date and time to run.

When the at> prompt appears enter the command you want to run at the specified time.

Pressing CTRL + D finishes the input and returns you to the command prompt.

There are lots of different date and time formats and it is worth checking the man pages for more ways to use ‘at’.
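
If you change your mind, atq lists the jobs that are queued and atrm removes one by the job number that atq reports (the 3 below is just an example number):

$ atq
$ atrm 3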

Man Pages

Man pages give you an outline of what commands are supposed to do and the switches that can be used with them.

The man pages are kind of dull on their own. (I guess they weren’t designed to excite us).

You can however do things to make your usage of man more appealing.

$ export PAGER=most

You will need to install ‘most’ for this to work, but when you do it makes your man pages more colourful.

You can limit the width of the man page to a certain number of columns using the following command:

$ export MANWIDTH=80

Finally, if you have a browser available you can open any man page in the default browser by using the -H switch as follows:

$ man -H <command>

Note this only works if you have a default browser set up within the $BROWSER environment variable.

Use htop To View And Manage Processes

Which command do you currently use to find out which processes are running on your computer? My bet is that you are using ps and that you are using various switches to get the output you desire.

Install htop. It is definitely a tool you will wish that you installed earlier.

htop provides a list of all running processes in the terminal, much like the task manager in Windows.

You can use a mixture of function keys to change the sort order and the columns that are displayed. You can also kill processes from within htop.

To run htop simply type the following into the terminal window:

$ htop

Navigate The File System Using ranger

If htop is immensely useful for controlling the processes running via the command line then ranger is immensely useful for navigating the file system using the command line.

You will probably need to install ranger to be able to use it but once installed you can run it simply by typing the following into the terminal:

$ ranger

The command line window will be much like any other file manager, but it works left to right rather than top to bottom, meaning that the left arrow key moves you up the folder structure and the right arrow key moves you down into it.

It is worth reading the man pages before using ranger so that you can get used to all the keyboard shortcuts that are available.

Cancel A Shutdown

So you started the shutdown, either via the command line or from the GUI, and you realised that you really didn’t want to do that. To cancel it, run the following command:

$ shutdown -c

Note that if the shutdown has already started then it may be too late to stop the shutdown.
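
A safe way to try this out is to schedule a shutdown a few minutes in the future and then cancel it straight away:

$ sudo shutdown -h +10
$ sudo shutdown -c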

Another command to try is as follows:

$ pkill shutdown

Killing Hung Processes The Easy Way

Imagine you are running an application and for whatever reason it hangs.

You could use ps -ef to find the process and then kill the process or you could use htop.

There is a quicker and easier command that you will love called xkill.

Simply type the following into a terminal and then click on the window of the application you want to kill.

$ xkill

What happens though if the whole system is hanging?

Hold down the Alt and SysRq keys on your keyboard and, whilst they are held down, type the following letters slowly:

REISUB

This will restart your computer without having to hold in the power button.

Download Youtube Videos

Generally speaking most of us are quite happy for Youtube to host the videos and we watch them by streaming them through our chosen media player.

If you know you are going to be offline for a while (i.e. due to a plane journey or travelling between the south of Scotland and the north of England) then you may wish to download a few videos onto a pen drive and watch them at your leisure.

All you have to do is install youtube-dl from your package manager.

You can use youtube-dl as follows:

$ youtube-dl url-to-video

You can get the url to any video on Youtube by clicking the share link on the video’s page. Simply copy the link and paste it into the command line (using the shift + insert shortcut).

Download Files From The Web With wget

The wget command makes it possible for you to download files from the web using the terminal.

The syntax is as follows:

$ wget path/to/filename

For example:

$ wget http://sourceforge.net/projects/antix-linux/files/Final/MX-krete/antiX-15-V_386-full.iso/download

There are a large number of switches that can be used with wget, such as -O, which lets you save the file under a different name.

In the example above I downloaded AntiX Linux from Sourceforge. The filename antiX-15-V_386-full.iso is quite long. It would be nice to download it as just antix.iso. To do this, use the following command:

$ wget -O antix.iso http://sourceforge.net/projects/antix-linux/files/Final/MX-krete/antiX-15-V_386-full.iso/download

Downloading a single file this way doesn’t seem worth it; you could easily just navigate to the webpage using a browser and click the link.

If however you want to download a dozen files then being able to add the links to an import file and use wget to download the files from those links will be much quicker.

Simply use the -i switch as follows:

$ wget -i /path/to/importfile
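
The import file is just a plain text file with one URL per line; a hypothetical example (the URLs below are placeholders):

http://example.com/downloads/first.iso
http://example.com/downloads/second.iso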

For more about wget visit http://www.tecmint.com/10-wget-command-examples-in-linux/.

Monitor your server performance with top command

When your machine is running slow and you want to find out what the causes are, or which processes are eating a lot of resources, just type top in the command line:

$ top

This will bring up a real-time resource monitoring screen showing CPU usage, disk I/O bottlenecks, RAM usage of every running process and the whole system in general.

To show the full path of running processes, press “c”. To show only the processes’ names, press “c” again.

To show usage of each separate CPU core, press “1”. To show CPU usage as a total, press “1” again.

To kill a process, press “k”, type in the process’s id and press “ENTER”.

You can find more control keys by pressing “h” to bring up the help document. Go back to “top” screen by pressing “ESC”.

Monitor your disk free space with df command

To know how much free space is available on your disks, type

$ df -h

-h argument tells the df command to show the size in a human readable format (2.0GB instead of 2000000000).

Monitor your disk usage of each folder with du command

To know how much space is used up by each file/folder, type

$ du -h -c -d 1 ./*
  • -h tells the command to show the size in a human readable format  (2.0GB instead of 2000000000).
  • -c tells the command to show the total size of the folder, including subfolders and files (instead of showing that folder size only)
  • -d 1 tells the command to list the size of folders and subfolders up to child level 1. You can change it to 0 or 2 or whatever you are interested in.
  • ./* tells the command to do this to all the folders and files that are the children of current folder. You can change this param to any path you want, with or without wildcard (*).

Monitor your server traffic with iftop command

To monitor in real-time how much traffic is going in and out of your server, for each external IP, and each port, type the following command

$ iftop -n -B

If you get permission denied, try running it with sudo privilege.

  • -n tells the command not to try to translate the remote host to domain names but instead leave those as IPs.
  • -B tells the command to show traffic in bytes instead of bits.

If you don’t have iftop installed in your system, you can install it

For Ubuntu users

$ apt install iftop

For Centos/Fedora users

# Install epel-release if not yet available
$ yum install epel-release
# Install iftop
$ yum install iftop

Monitor log file in real-time with watch and tail command

If you have an application that logs to a file and you want to monitor that log in real-time, type the following command

$ watch tail /path/to/log/file.log

The watch command will watch for any update in the output of the tail command and keep printing the latest tail of that log file to the terminal console.

If you want to change the number of lines to print out, change the command to

$ watch tail -n 15 /path/to/log/file.log

Remember to replace 15 with the number of lines that you want.

List all running processes with ps command

To list all running processes in your system, type the following command

$ ps aux

To filter the list with only the processes that you are interested in, add a grep command

$ ps aux | grep abc | grep xyz

To count how many processes are running, add a -c argument

$ ps aux | grep abc | grep xyz -c

List all connections with netstat command

To list all connections in your system, type the following command

$ netstat -naltp

To filter the result, add a grep command. For example, to filter all HTTP connections (port 80) that are currently open (ESTABLISHED), type the following command

$ netstat -naltp | grep :80 | grep ESTABLISHED

To count how many results match the filter, add the -c argument

$ netstat -naltp | grep :80 | grep ESTABLISHED -c

How to use nohup to execute commands in the background and keep them running after you exit from a shell prompt

Most of the time you log in to a remote server via ssh. If you start a shell script or command and then exit (abort the remote connection), the process/command will get killed. Sometimes a job or command takes a long time to finish. If you are not sure when the job will finish, it is better to leave the job running in the background. But if you log out of the system, the job will be stopped and terminated by your shell. What do you do to keep the job running in the background when the process gets SIGHUP?

nohup command to the rescue

In these situations, we can use the nohup command-line utility, which allows a command, process, or shell script to continue running in the background after we log out from a shell.

nohup is a POSIX command to ignore the HUP (hangup) signal. The HUP signal is, by convention, the way a terminal warns dependent processes of logout.

Output that would normally go to the terminal goes to a file called nohup.out if it has not already been redirected.

nohup command syntax:

The syntax is as follows

$ nohup command-name &
$ exit

or

$ nohup /path/to/command-name arg1 arg2 > myoutput.log &
$ exit

Where,

  • command-name: the name of the shell script or command. You can pass arguments to the command or shell script.
  • &: nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.
  • exit: after nohup and & start the command in the background, you can type exit or press CTRL-D to leave the shell session while the command keeps running.

Use jobs -l command to list all jobs:

$ jobs -l

nohup command examples

First, login to remote server using ssh command:

$ ssh user@remote.server.com

I am going to execute a shell script called pullftp.sh:

$ nohup pullftp.sh &

Type exit or press CTRL + D to exit from remote server:

> exit

In this example, I am going to find all programs and scripts with setuid bit set on, enter:

$ nohup find / -xdev -type f -perm +u=s -print > out.txt &

Type exit or press CTRL + D to exit from remote server. The job still keeps on running after you exit.

> exit

nohup is often used in combination with the nice command to run processes at a lower priority.

Please note that nohup does not change the scheduling priority of COMMAND; use nice command for that purpose. For example:

$ nohup nice -n 5 ls / > out.txt &

As you can see, nohup keeps processes running after you exit from a shell. Read the man pages of nohup(1) and nice(1) for more information. Please note that nohup is available on almost all Solaris/BSD/Linux/UNIX variants.

What’s the difference between nohup and ampersand (&)

Both nohup myprocess.out & and myprocess.out & set myprocess.out to run in the background. After I shut down the terminal, the process is still running. What’s the difference between them?

nohup catches the hangup signal (see man 7 signal) while the ampersand doesn’t (unless the shell is configured that way or doesn’t send SIGHUP at all).

Normally, when running a command using & and exiting the shell afterwards, the shell will terminate the sub-command with the hangup signal (kill -SIGHUP <pid>). This can be prevented using nohup, as it catches the signal and ignores it so that it never reaches the actual application.
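
You can see the difference with two throwaway jobs (a small sketch; whether the plain background job survives depends on your shell settings, as discussed below):

$ sleep 1000 &
$ nohup sleep 2000 &
$ exit

Then, from a new login session, $ ps aux | grep "[s]leep" should still show the nohup-ed sleep, while the plain one may have been killed by SIGHUP.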

In case you’re using bash, you can use the command shopt | grep hupon to find out whether your shell sends SIGHUP to its child processes or not. If it is off, processes won’t be terminated, as it seems to be the case for you. More information on how bash terminates applications can be found here.

There are cases where nohup does not work, for example when the process you start reinstalls its own handler for the SIGHUP signal, as is the case here.

Advanced topics

Existing jobs, processes

Some shells (e.g. bash) provide a shell builtin that may be used to prevent SIGHUP being sent or propagated to existing jobs, even if they were not started with nohup. In bash, this can be obtained by using disown -h job; using the same builtin without arguments removes the job from the job table, which also implies that the job will not receive the signal. Before using disown on an active job, it should be stopped by Ctrl-Z, and continued in the background by the bg command.[2] Another relevant bash option is shopt huponexit, which automatically sends the HUP signal to jobs when the shell is exiting normally.[3]

The AIX and Solaris versions of nohup have a -p option that modifies a running process to ignore future SIGHUP signals. Unlike the above-described disown builtin of bash, nohup -p accepts process IDs.[4]

Overcoming hanging

Note that nohupping backgrounded jobs is typically used to avoid terminating them when logging off from a remote SSH session. A different issue that often arises in this situation is that ssh is refusing to log off (“hangs”), since it refuses to lose any data from/to the background job(s).[5][6] This problem can also be overcome by redirecting all three I/O streams:

$ nohup ./myprogram > foo.out 2> foo.err < /dev/null &

Also note that a closing SSH session does not always send a HUP signal to depending processes. Among others, this depends on whether a pseudo-terminal was allocated or not.[7]

Alternatives

Terminal multiplexer

A terminal multiplexer can run a command in a separate session, detached from the current terminal, which means that if the current session ends, the detached session and its associated processes keep running. One can then reattach to the session later on.

For example, the following invocation of screen will run somescript.sh in the background of a detached session:

screen -A -m -d -S somename ./somescript.sh &

disown command

The disown command is used to remove jobs from the job table, or to mark jobs so that a SIGHUP signal is not sent on session termination.

Here is how you can try it out:

$ pullftp.sh &
$ disown -h
$ exit

From the bash(1) man page:

By default, removes each JOBSPEC argument from the table of active jobs. If the -h option is given, the job is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. The -a option, when JOBSPEC is not supplied, means to remove all jobs from the job table; the -r option means to remove only running jobs.

at command

You can use the at command to queue a job for later execution. For example, you can queue the pullftp.sh script for execution one minute later:

$ echo "pullftp.sh" | at now + 1 minute

Slowloris DoS Attack and Mitigation on NGINX Web Server

1. Introduction

Slowloris DoS Attack gives a hacker the power to take down a web server in less than 5 minutes by just using a moderate personal laptop. The whole idea behind this attack technique is making use of HTTP GET requests to occupy all available HTTP connections permitted on a web server.

Technically, NGINX is not affected by this attack since NGINX doesn’t rely on threads to handle requests. Instead it uses a much more scalable event-driven (asynchronous) architecture. However, we can see later in this article that in practice, the default configurations can make an NGINX web server “vulnerable” to Slowloris.

In this article, we are going to take a look at this attack technique and some ways to mitigate this attack on NGINX.

2. Slowloris DoS Attack

From acunetix:

A Slow HTTP Denial of Service (DoS) attack, otherwise referred to as Slowloris HTTP DoS attack, makes use of HTTP GET requests to occupy all available HTTP connections permitted on a web server.

A Slow HTTP DoS Attack takes advantage of a vulnerability in thread-based web servers which wait for entire HTTP headers to be received before releasing the connection. While some thread-based servers such as Apache make use of a timeout to wait for incomplete HTTP requests, the timeout, which is set to 300 seconds by default, is re-set as soon as the client sends additional data.

This creates a situation where a malicious user could open several connections on a server by initiating an HTTP request but does not close it. By keeping the HTTP request open and feeding the server bogus data before the timeout is reached, the HTTP connection will remain open until the attacker closes it. Naturally, if an attacker had to occupy all available HTTP connections on a web server, legitimate users would not be able to have their HTTP requests processed by the server, thus experiencing a denial of service.

This enables an attacker to restrict access to a specific server with very low utilization of bandwidth. This breed of DoS attack is starkly different from other DoS attacks such as SYN flood attacks which misuse the TCP SYN (synchronization) segment during a TCP three-way-handshake.

How it works

An analysis of an HTTP GET request helps further explain how and why a Slow HTTP DoS attack is possible. A complete HTTP GET request resembles the following.

GET /index.php HTTP/1.1[CRLF]
Pragma: no-cache[CRLF]
Cache-Control: no-cache[CRLF]
Host: testphp.vulnweb.com[CRLF]
Connection: Keep-alive[CRLF]
Accept-Encoding: gzip,deflate[CRLF]
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.63 Safari/537.36[CRLF]
Accept: */*[CRLF][CRLF]

Something that is of particular interest is the [CRLF] in the GET request above. Carriage Return Line Feed (CRLF), is a non-printable character that is used to denote the end of a line. Similar to text editors, an HTTP request would contain a [CRLF] at the end of a line to start a fresh line and two [CRLF] characters (i.e. [CRLF][CRLF]) to denote a blank line. The HTTP protocol defines a blank line as the completion of a header. A Slow HTTP DoS takes advantage of this by not sending a finishing blank line to complete the HTTP header.
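
To make this concrete, here is a harmless single-connection illustration (a sketch using netcat against a web server you run yourself; the sleep simply holds the connection open without ever sending the final blank line):

$ { printf 'GET / HTTP/1.1\r\nHost: localhost\r\n'; sleep 300; } | nc 127.0.0.1 80

From the server's point of view this looks like a legitimate request whose headers just have not finished arriving yet.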

To make matters worse, a Slow HTTP DoS attack is not commonly detected by Intrusion Detection Systems (IDS) since the attack does not contain any malformed requests. The HTTP request will seem legitimate to the IDS and will pass it onto the web server.

Perform a Slowloris DoS Attack

Performing a Slowloris DoS Attack is a piece of cake nowadays. We can easily find a lot of implementations of the attack hosted on GitHub with a simple Google search.

For demonstration, we can use a Python implementation of Slowloris to perform an attack.

How this code works

This implementation works like this:

  1. We start making lots of HTTP requests.
  2. We send headers periodically (every ~15 seconds) to keep the connections open.
  3. We never close the connection unless the server does so. If the server closes a connection, we create a new one and keep doing the same thing.

This exhausts the server’s thread pool and the server can’t accept requests from other people.

How to install and run this code

You can clone the git repo or install using pip. Here’s how we run it.

$ sudo pip3 install slowloris
$ slowloris example.com

That’s it. We are performing a Slowloris attack on example.com!

If you want to clone using git instead of pip, here’s how you do it.

git clone https://github.com/gkbrk/slowloris.git
cd slowloris
python3 slowloris.py example.com

By default, slowloris.py will try to keep 150 open connections to the target web server, and you can change this number by adding the command line argument “-s 1000”.

However, because this code sends keep-alive messages for one connection after another (not in parallel), some connections will be timed out and disconnected by the target server before their turn to receive a keep-alive message comes. The result is that, in practice, an instance of slowloris.py can only keep about 50-100 open connections to the target.

We can work around this limitation by opening multiple instances of slowloris.py, with each trying to keep 50 open connections. That way, we can keep thousands of connections open with only one computer.
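
For example, a small shell loop could launch 20 such instances against a test server you own (a sketch, assuming the pip-installed slowloris command shown above):

$ for i in $(seq 1 20); do slowloris -s 50 example.com & done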

3. Preventing and Mitigating Slowloris (a.k.a. Slow HTTP) DoS Attacks in NGINX

Slowloris works by opening a lot of connections to the target web server and keeping those connections open by periodically sending keep-alive messages on each connection. Understanding this, we can come up with several ways to mitigate the attack.

Technically, Slowloris only affects thread-based web servers such as Apache, while leaving event-driven (asynchronous) web servers like NGINX unaffected. However, the default configurations of NGINX can make NGINX vulnerable to this attack.

Identify the attack

First of all, to see if our web server is under attack, we can list connections to port 80 on our web server by running the following command:

$ netstat -nalt | grep :80

The output will look like this:

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 10.128.0.2:48406        169.254.169.254:80      ESTABLISHED
tcp        0      0 10.128.0.2:48410        169.254.169.254:80      ESTABLISHED
tcp        0      0 10.128.0.2:48396        169.254.169.254:80      CLOSE_WAIT
tcp        0      0 10.128.0.2:48398        169.254.169.254:80      CLOSE_WAIT
tcp        0      0 10.128.0.2:48408        169.254.169.254:80      ESTABLISHED
tcp        0      0 10.128.0.2:48400        169.254.169.254:80      CLOSE_WAIT

We can further filter only the open connections by running the following command:

$ netstat -nalt | grep :80 | grep ESTA

The output will look like:

tcp        0      0 10.128.0.2:48406        169.254.169.254:80      ESTABLISHED
tcp        0      0 10.128.0.2:48410        169.254.169.254:80      ESTABLISHED
tcp        0      0 10.128.0.2:48408        169.254.169.254:80      ESTABLISHED

We can also count the open connections by adding -c to the end of the above command:

$ netstat -nalt | grep :80 | grep ESTA -c

The output will be the number of connections in the filtered result:

3

How is NGINX vulnerable to Slowloris?

NGINX can be vulnerable to Slowloris in several ways:

  • Config #1: By default, NGINX limits the number of connections accepted by each worker process to 768.
  • Config #2: Default number of open connections limited by the system is too low.
  • Config #3: Default number of open connections limited for nginx user (usually www-data) is too low.
  • Config #4: By default, NGINX itself limits the number of open files for each of its worker processes to no more than 1024.

For example, let’s say we run NGINX on a 2-core CPU server, with default configurations.

  • Config #1: NGINX will run with 2 worker processes, which can handle up to 768 x 2 = 1536 connections.
  • Config #2: Default number of open connections limited by the system: soft limit = 1024, hard limit = 4096.
  • Config #3: Default number of open connections limited for nginx user (usually www-data): soft limit = 1024, hard limit = 4096.
  • Config #4: By default, NGINX itself limits the number of open files for each of its worker processes to no more than 1024.

Therefore NGINX can handle at most 1024 connections. It would take only about 20 instances of slowloris.py to take the server down. Wow!

Mitigation

We can mitigate the attack with some network-related approaches like:

  • Limiting the number of connections from one IP
  • Lowering the timeout wait time for each HTTP connection

However, by using proxies (such as the Tor network) to make connections appear to come from different IPs, the attacker can easily bypass these network defense approaches. After that, 1024 connections is something that is pretty easy to achieve.

Therefore, to protect NGINX from this type of attack, we should optimize the default configurations mentioned above.

Config #1: NGINX worker connections limit

Open the NGINX configuration file, which is usually located at /etc/nginx/nginx.conf, and change this setting:

events {
    worker_connections 768;
}

to something larger like this:

events {
    worker_connections 100000;
}

This setting tells NGINX to allow each of its worker processes to handle up to 100k connections.

Config #2: system open file limit

Even though we told NGINX to allow each of its worker processes to handle up to 100k connections, the number of connections may still be further limited by the system open file limit.

To check the current system file limit:

$ cat /proc/sys/fs/file-max

Normally this number is about 10% of the system’s memory, i.e. if our system has 2GB of RAM, this number will be around 200k, which should be enough. However, if this number is too small, we can increase it. Open /etc/sysctl.conf and change the following line (or add it if it’s not already there):

fs.file-max = 500000

Apply the setting:

$ sudo sysctl -p

Check the setting again:

$ cat /proc/sys/fs/file-max

The output should be:

500000

Config #3: user’s open file limit

Besides the system-wide open file limit mentioned in Config #2, Linux systems also limit the number of open files per user. By default, NGINX worker processes run as the www-data or nginx user, and are therefore limited by this number.

To check the current limit for nginx’s user (www-data in the example below), first we need to switch to www-data user:

$ sudo su - www-data -s /bin/bash

By default, www-data user is not provided with a shell, therefore to run commands as www-data, we must provide a shell with the -s argument and provide the /bin/bash as the shell.

After switching to www-data user, we can check the open file limit of that user:

$ ulimit -n

To check the hard limit:

$ ulimit -Hn

To check the soft limit:

$ ulimit -Sn

By default, the soft limit is 1024 and hard limit is 4096, which is too small to survive a Slowloris attack.

To increase this limit, open /etc/security/limits.conf and add the following lines (remember to switch back to a sudo user so that we can edit the file):

*                soft    nofile          102400
*                hard    nofile          409600
www-data         soft    nofile          102400
www-data         hard    nofile          409600

A note for RHEL/CentOS/Fedora/Scientific Linux users
For these systems, we will have to do an extra step for the limit to take effect. Edit the /etc/pam.d/login file and add/modify the following line (make sure you get pam_limits.so):

session required pam_limits.so

then save and close the file.

The user must log out and log back in for the settings to take effect.

After that, run the above checks to see if the new soft limit and hard limit have been applied.

Config #4: NGINX’s worker number of open files limit

Even when the ulimit -n command for www-data returns 102400, the nginx worker process open file limit is still 1024.

To verify the limit applied to the running worker process, first we need to find the process id of the worker process by listing the running worker processes:

$ ps aux | grep nginx

root      1095  0.0  0.0  85880  1336 ?        Ss   18:30   0:00 nginx: master process /usr/sbin/nginx
www-data  1096  0.0  0.0  86224  1764 ?        S    18:30   0:00 nginx: worker process
www-data  1097  0.0  0.0  86224  1764 ?        S    18:30   0:00 nginx: worker process
www-data  1098  0.0  0.0  86224  1764 ?        S    18:30   0:00 nginx: worker process
www-data  1099  0.0  0.0  86224  1764 ?        S    18:30   0:00 nginx: worker process

then take one of the process ids (e.g. 1096 in the above example output), and then check the limit currently applied to the process (remember to change 1096 to the right process id in your server):

$ cat /proc/1096/limits

Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15937                15937                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15937                15937                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

You can see that the max open files is still 1024:

Max open files            1024                 4096                 files

That is because NGINX itself also limits the number of open files by default to 1024.

To change this, open NGINX configuration file (/etc/nginx/nginx.conf) and add/edit the following line:

worker_rlimit_nofile 102400;

Make sure that this line is put at the top level configurations and not nested in the events configuration like worker_connections.

The final nginx.conf would look something like this:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 102400;

events {
    worker_connections 100000;
}
...

Restart NGINX, then verify the limit applied to the running worker process again. The Max open files should now change to 102400.
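
For example (a sketch; the worker process id will differ on your machine):

$ sudo service nginx restart
$ ps aux | grep "[n]ginx: worker"
$ grep "Max open files" /proc/1096/limits

Replace 1096 with one of the worker process ids shown by the ps command.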

Congratulations! Now your NGINX server can survive a Slowloris attack. You could argue that the attacker can still make more than 100k open connections to take down the target web server, but that would be more of a generic DDoS attack than a Slowloris attack specifically.

Conclusions

Slowloris is a very clever technique that allows an attacker to use very limited resources to perform a DoS attack on a web server. Technically, NGINX servers are not vulnerable to this attack, but the default configurations make NGINX vulnerable. By optimizing the default configurations on an NGINX server, we can mitigate the attack.

If you haven’t checked your NGINX server for this type of attack, you should check it now, because tomorrow, who knows, a curious high school kid might perform a Slowloris attack (“for educational purposes”) from his laptop and take down your server. 😀

How to Set Up a shared folder using Samba on Ubuntu with auto mount


Introduction

One of the most popular questions long-time Linux users have been asked is, “How do I create a network share that Windows can see?”

The best solution for sharing Linux folders across a network is a piece of software that’s about as old as Linux itself: Samba.

In this guide, we’ll cover how to configure Samba mounts on an Ubuntu 14.04 server.

Prerequisites

For the purposes of this guide, we are going to refer to the server that is going to be sharing its directories as the host, and the server that will mount these directories as the client.

In order to keep these straight throughout the guide, I will be using the following IP addresses as stand-ins for the host and client values:

  • Host: 1.2.3.4
  • Client: 111.111.111.111

You should substitute the values above with your own host and client values.

Download and Install the Components

Samba is a popular Linux tool, and chances are you already have it installed. To see if that’s the case, open up a terminal and test to see if its configuration folder exists:

host

$ ls -l /etc/samba

If Samba hasn’t been installed, let’s install it first.

host

$ sudo apt-get update
$ sudo apt-get install samba

Configuring network share on host

Set a password for your user in Samba

host

$ sudo smbpasswd -a <user_name>

Note: Samba uses a separate set of passwords than the standard Linux system accounts (stored in /etc/samba/smbpasswd), so you’ll need to create a Samba password for yourself. This tutorial implies that you will use your own user and it does not cover situations involving other users passwords, groups, etc…

Tip1: Use the password for your own user to facilitate.

Tip2: Remember that your user must have permission to write and edit the folder you want to share. Eg.:

$ sudo chown <user_name> /var/opt/blah/blahblah
$ sudo chown :<user_name> /var/opt/blah/blahblah

Tip3: If you’re using a user other than your own, it needs to exist in your system beforehand; you can create it without shell access using the following command:

$ sudo useradd USERNAME --shell /bin/false

You can also hide the user on the login screen by adjusting lightdm’s configuration, in /etc/lightdm/users.conf add the newly created user to the line :

hidden-users=

Create a directory to be shared

host

$ mkdir /home/<user_name>/<folder_name>

After Samba is installed, a default configuration file called smb.conf.default can be found in /etc/samba. This file needs to be copied to the same folder under the name smb.conf, but before doing this, it is worth running the same ls -l /etc/samba command as above to see if your distro already has that file there. If it doesn’t exist, it’s as simple as using sudo (or sudo -s to retain escalated privileges for the time being, or su for systems without sudo) and copying the default file:

host

$ sudo cp /etc/samba/smb.conf.default /etc/samba/smb.conf

Make a safe backup copy of the original smb.conf file to your home folder, in case you make an error

host

$ sudo cp /etc/samba/smb.conf ~

Edit file /etc/samba/smb.conf

host

$ sudo nano /etc/samba/smb.conf

Once smb.conf has loaded, add this to the very end of the file:

host

[<folder_name>]
path = /home/<user_name>/<folder_name>
valid users = <user_name>
read only = no

Tip: There should be no spaces between the lines, and note also that there should be a single space both before and after each of the equal signs.

The [Share Name] is the name of the folder that will be viewable after entering the network hostname (eg: \\LINUXPC\Share Name). The path will be the Linux folder that will be accessible after entering this share. As for the options, there are many. As I mentioned above, the smb.conf file itself contains a number of examples; for all others, there’s a huge page over at the official website to take care of the rest. Let’s cover a couple of the more common ones, though.

guest ok = yes
— Guest accounts are OK to use the share; aka: no passwords.
guest only = yes
— Only guests may use the share.
writable = yes
— The share will allow files to be written to it.
read only = yes
— Files cannot be written to the share, just read.
force user = username
— Act as this user when accessing the share, even if a different user/pass is provided.
force group = groupname
— Act as this usergroup when accessing the share.
username = username, username2, @groupname
— If the password matches one of these users, the share can be accessed.
valid users = username, username2, @groupname
— Like above, but requires users to enter their username.

Here are a couple of example shares I use:

The force user and force group options are not go-to options, so I’d recommend trying to create a share without them first. In some cases, permission issues will prevent you from writing to certain folders, a situation I found myself in where my NAS mounts and desktop folder were concerned. If worse comes to worse, simply add these force options and retest.

Each time the smb.conf file is edited, Samba should be restarted to reflect the changes. On most distros, running this command as sudo (or su) should take care of it:

host

$ /etc/init.d/samba restart

For Ubuntu-based distros, the service command might need to be used. As sudo:

host

$ sudo service smbd restart

If neither of these commands work, refer to your distro’s documentation.

Once Samba has restarted, use this command to check your smb.conf for any syntax errors

host

$ testparm

With Samba all configured, let’s connect to our shares!

Connect to Samba Network Share from clients

Once a Samba share is created, it can be accessed through applications that support the SMB protocol, as one would expect. While Samba’s big focus is Windows, it’s not the only piece of software that can take advantage of it. Many file managers for mobile platforms are designed to support the protocol, and other Linux PCs can mount them as well.

To access your network share from your desktop client, use your username (<user_name>) and password through the path smb://<HOST_IP_OR_NAME>/<folder_name>/ (Linux users) or \\<HOST_IP_OR_NAME>\<folder_name>\ (Windows users). Note that the <folder_name> value is the share name you entered in [<folder_name>] in /etc/samba/smb.conf.

We’ll take a more detailed look at a couple of different solutions for accessing our Samba shares.

Connect to Samba Network Share from Windows client

First things first: The Microsoft side. In Windows, browsing to “Network” should reveal your Linux PC’s hostname; in my particular case, it’s TGGENTOO. After double-clicking on it, the entire list of shares will be revealed.

Network shares are all fine and good, but it’s a hassle to get to one: it requires a bunch of clicking and waiting. Depending on Windows’ mood, the Linux hostname might not even appear on the network. To ease the pain of having to go through all that again, mounting a share from within Windows is a great option. For simple needs, a desktop shortcut might provide quick enough access, but shortcuts by design are quite limited. When mounting a network share as a Windows network drive, however, it acts just as a normal hard drive does.

Mounting a Samba share as a network drive is simple:

client

$ net use X: \\HOSTNAME\Share
$ net use X: "\\HOSTNAME\Share Name"

Quotation marks surrounding the entire share path are required if there’s a space in its name.

A real-world example:

Connect to Samba Network Share from Linux client

Install smbclient

client

$ sudo apt-get install smbclient

List all shares:

client

$ smbclient -L //<HOST_IP_OR_NAME>/<folder_name> -U <user>

Connect:

client

$ smbclient //<HOST_IP_OR_NAME>/<folder_name> -U <user>

Mounting a Samba Network Share from Linux client

You’ll need the cifs-utils package in order to mount SMB shares:

client

$ sudo apt-get install cifs-utils

After that, just make a directory…

client

$ mkdir ~/Desktop/Windows-Share

… and mount the share to it.

client

$ mount -t cifs -o username=username //HOSTNAME/Share /mnt/location

For example

client

$ sudo mount.cifs //WindowsPC/Share /home/geek/Desktop/Windows-Share -o user=geek

Using this command will provide a prompt to enter a password; this can be avoided by modifying the command to username=username,password=password (not great for security reasons).
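
A common alternative (a sketch; the file path and credentials below are just examples) is to keep the username and password in a root-only file and point mount.cifs at it with the credentials= option:

client

$ sudo sh -c 'printf "username=geek\npassword=secret\n" > /root/.smbcredentials'
$ sudo chmod 600 /root/.smbcredentials
$ sudo mount.cifs //WindowsPC/Share /home/geek/Desktop/Windows-Share -o credentials=/root/.smbcredentials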

In case you need help understanding the mount command, here’s a breakdown:

$ sudo mount.cifs – This is just the mount command, set to mount a CIFS (SMB) share.

WindowsPC – This is the name of the Windows computer.  Type “This PC” into the Start menu on Windows, right click it, and go to Properties to see your computer name.

//WindowsPC/Share – This is the full path to the shared folder.

/home/geek/Desktop/Windows-Share – This is where we’d like the share to be mounted.

-o user=geek – This is the Windows username that we are using to access the shared folder.

I recommend mounting the network share this way before adding it to /etc/fstab, just to make sure all is well. Afterwards, your system's fstab file can be edited with sudo (or as root) to automatically mount the share at boot.

Adding values for a network share to fstab isn’t complicated, but making sure they’re the correct values can be. Depending on the network, the required hostname might have to be the text value (TGGENTOO), or an IP address (192.168.1.100). Some trial and error can be expected, and at worst, a trip to your distro’s website might have to be taken.

Here are two examples of what could go into an fstab:

//192.168.1.100/Storage /mnt/nas-storage cifs username=guest,uid=1000,iocharset=utf8 0 0
//TGGENTOO/Storage /mnt/nasstore cifs username=guest,uid=1000,iocharset=utf8 0 0

If a given configuration proves problematic, make sure that the Samba share can be mounted outside of fstab, with the mount command (as seen above). Once a share is properly mounted that way, it should be a breeze translating it into fstab speak.
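
One quick way to test new fstab entries without rebooting is to ask mount to process the whole file; any errors in the new lines will show up immediately:

client

$ sudo mount -a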

Conclusion

Samba provides a quick and easy way to share files and folders between Windows and Linux machines.

If you only want a network share between Linux machines, you can also set up a network share using NFS.

If you want a distributed network share with high availability and automatic replication, you can set up a network share using GlusterFS.

Samba is often said to provide the best performance for smaller files, but I haven't run a benchmark myself, so I can't guarantee anything.

Hope you find this post helpful and get your network share running. Until next time!

How To Set Up a shared folder using NFS on Ubuntu 14.04 with auto mount

How To Set Up a shared folder using NFS on Ubuntu 14.04 with auto mount

Introduction

NFS, or Network File System, is a distributed filesystem protocol that allows you to mount remote directories on your server. This allows you to leverage storage space in a different location and to write to the same space from multiple servers easily. NFS works well for directories that will have to be accessed regularly.

In this guide, we’ll cover how to configure NFS mounts on an Ubuntu 14.04 server.

Prerequisites

In this guide, we will be configuring directory sharing between two Ubuntu 14.04 servers. These can be of any size. For each of these servers, you will need an account set up with sudo privileges. You can learn how to configure such an account by following steps 1-4 in our initial setup guide for Ubuntu 14.04 servers.

For the purposes of this guide, we are going to refer to the server that shares its directories as the host and the server that mounts these directories as the client.

In order to keep these straight throughout the guide, I will be using the following IP addresses as stand-ins for the host and client values:

  • Host: 1.2.3.4
  • Client: 111.111.111.111

You should substitute the values above with your own host and client values.

Download and Install the Components

Before we can begin, we need to install the necessary components on both our host and client servers.

On the host server, we need to install the nfs-kernel-server package, which will allow us to share our directories. Since this is the first operation that we’re performing with apt in this session, we’ll refresh our local package index before the installation:

host

$ sudo apt-get update
$ sudo apt-get install nfs-kernel-server

Once these packages are installed, you can switch over to the client computer.

On the client computer, we’re going to have to install a package called nfs-common, which provides NFS functionality without having to include the server components. Again, we will refresh the local package index prior to installation to ensure that we have up-to-date information:

client

$ sudo apt-get update
$ sudo apt-get install nfs-common

Create the Share Directory on the Host Server

We’re going to experiment with sharing two separate directories during this guide. The first directory we’re going to share is the /home directory that contains user data.

The second is a general purpose directory that we’re going to create specifically for NFS so that we can demonstrate the proper procedures and settings. This will be located at /var/nfs.

Since the /home directory already exists, go ahead and start by creating the /var/nfs directory:

host

$ sudo mkdir /var/nfs

Now, we have a new directory designated specifically for sharing with remote hosts. However, the directory ownership is not ideal. We should give user ownership to a user on our system named nobody, and group ownership to a group on our system named nogroup as well.

We can do that by typing this command:

host

$ sudo chown nobody:nogroup /var/nfs

We only need to change the ownership on our directories that are used specifically for sharing. We wouldn’t want to change the ownership of our /home directory, for instance, because it would cause a great number of problems for any users we have on our host server.
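
As an optional check (not part of the original steps), you can confirm the new ownership with a quick listing; it should now show nobody and nogroup as the owner and group:

host

$ ls -ld /var/nfs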

Configure the NFS Exports on the Host Server

Now that we have our directories created and assigned, we can dive into the NFS configuration file to set up the sharing of these resources.

Open the /etc/exports file in your text editor with root privileges:

host

$ sudo nano /etc/exports

The file that you see will have some comments showing the general structure of each configuration line. Basically, the syntax is something like:

directory_to_share     client(share_option1, ... , share_optionN)

So we want to create a line for each of the directories that we wish to share. Since in this example our client has an IP of 111.111.111.111, our lines will look like this:

host

/home     111.111.111.111(rw,sync,no_root_squash,no_subtree_check)
/var/nfs  111.111.111.111(rw,sync,no_subtree_check)

We’ve explained everything here but the specific options we’ve enabled. Let’s go over those now.

  • rw: This option gives the client computer both read and write access to the volume.
  • sync: This option forces NFS to write changes to disk before replying. This results in a more stable and consistent environment, since the reply reflects the actual state of the remote volume.
  • no_subtree_check: This option prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
  • no_root_squash: By default, NFS translates requests from a remote root user into a non-privileged user on the server. This was intended as a security feature, preventing a root account on the client from using the filesystem of the host as root. This directive disables that behavior for this share.

When you finish making your changes, save and close the file.

Next, you should create the NFS table that holds the exports of your shares by typing:

host

$ sudo exportfs -a

However, the NFS service is not actually running yet. You can start it by typing:

host

$ sudo service nfs-kernel-server start

This will make your shares available to the clients that you configured.
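
As an optional extra check, you can verify the export list from the client machine; the showmount utility is included in the nfs-common package installed earlier:

client

$ showmount -e 1.2.3.4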

Create the Mount Points and Mount Remote Shares on the Client Server

Now that your host server is configured and making its directory shares available, we need to prep our client.

We’re going to have to mount the remote shares, so let’s create some mount points. We’ll use the traditional /mnt as a starting point and create a directory called nfs under it to keep our shares consolidated.

The actual directories will correspond with their location on the host server. We can create each directory, and the necessary parent directories, by typing this:

client

$ sudo mkdir -p /mnt/nfs/home
$ sudo mkdir -p /mnt/nfs/var/nfs

Now that we have some place to put our remote shares, we can mount them by addressing our host server, which in this guide is 1.2.3.4, like this:

client

$ sudo mount 1.2.3.4:/home /mnt/nfs/home
$ sudo mount 1.2.3.4:/var/nfs /mnt/nfs/var/nfs

These should mount the shares from our host computer onto our client machine. We can double check this by looking at the available disk space on our client server:

client

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda               59G  1.3G   55G   3% /
none                  4.0K     0  4.0K   0% /sys/fs/cgroup
udev                  2.0G   12K  2.0G   1% /dev
tmpfs                 396M  324K  396M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  2.0G     0  2.0G   0% /run/shm
none                  100M     0  100M   0% /run/user
1.2.3.4:/home          59G  1.3G   55G   3% /mnt/nfs/home

As you can see at the bottom, only one of our shares has shown up. This is because both of the shares that we exported are on the same filesystem on the remote server, meaning that they share the same pool of storage. In order for the Avail and Use% columns to remain accurate, only one share may be added into the calculations.

If you want to see all of the NFS shares that you have mounted, you can type:

client

$ mount -t nfs
1.2.3.4:/home on /mnt/nfs/home type nfs (rw,vers=4,addr=1.2.3.4,clientaddr=111.111.111.111)
1.2.3.4:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,vers=4,addr=1.2.3.4,clientaddr=111.111.111.111)

This will show all of the NFS mounts that are currently accessible on your client machine.

Test NFS Access

You can test access to your shares by writing something to them. You can write a test file to one of your shares like this:

client

$ sudo touch /mnt/nfs/home/test_home

Let's write a test file to the other share as well to demonstrate an important difference:

client

$ sudo touch /mnt/nfs/var/nfs/test_var_nfs

Look at the ownership of the file in the mounted home directory:

client

$ ls -l /mnt/nfs/home/test_home
-rw-r--r-- 1 root   root      0 Apr 30 14:43 test_home

As you can see, the file is owned by root. This is because we disabled the root_squash behavior on this share (with no_root_squash); otherwise the file would have been written as an anonymous, non-root user.

On our other test file, which lives on the share where root_squash is still in effect, we will see something different:

client

$ ls -l /mnt/nfs/var/nfs/test_var_nfs
-rw-r--r-- 1 nobody nogroup 0 Apr 30 14:44 test_var_nfs

As you can see, this file was assigned to the “nobody” user and the “nogroup” group. This follows our configuration.

Make Remote NFS Directory Mounting Automatic

We can make the mounting of our remote NFS shares automatic by adding it to our fstab file on the client.

Open this file with root privileges in your text editor:

client

$ sudo nano /etc/fstab

At the bottom of the file, we’re going to add a line for each of our shares. They will look like this:

client

1.2.3.4:/home    /mnt/nfs/home   nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0
1.2.3.4:/var/nfs    /mnt/nfs/var/nfs   nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0

The options that we are specifying here can be found in the man page that describes NFS mounting in the fstab file:

client

$ man nfs

This will automatically mount the remote partitions at boot (it may take a few moments for the connection to be made and the shares to be available).
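
If a share isn't already mounted from the earlier steps, you can also test an fstab entry right away, without rebooting, by mounting it by its mount point:

client

$ sudo mount /mnt/nfs/home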

Unmount an NFS Remote Share

If you no longer want the remote directory to be mounted on your system, you can unmount it easily by moving out of the share’s directory structure and unmounting, like this:

client

$ cd ~
$ sudo umount /mnt/nfs/home
$ sudo umount /mnt/nfs/var/nfs

This will remove the remote shares, leaving only your local storage accessible:

client

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         59G  1.3G   55G   3% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            2.0G   12K  2.0G   1% /dev
tmpfs           396M  320K  396M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            2.0G     0  2.0G   0% /run/shm
none            100M     0  100M   0% /run/user

As you can see, our NFS shares are no longer available as storage space.

Conclusion

NFS provides a quick and easy way to access remote systems over a network. However, the protocol itself is not encrypted. If you are using this in a production environment, consider routing NFS over SSH or a VPN connection to create a more secure experience.

How to add custom css to WordPress

How to add custom css to WordPress

While managing your WordPress site, you may sometimes want to edit the styling of some parts of a post or of your site, and you want to edit or add custom CSS without modifying the theme's source code. Let's look at some ways to do that.

Use JetPack’s Custom CSS Editor

If you already have JetPack by WordPress.com plugin installed, go to your JetPack plugin settings -> Appearance -> Custom CSS -> Configure your Custom CSS settings, click to open the Custom CSS Editor, then append your custom CSS there and click save.

The custom CSS will be appended to all the pages of your WordPress website.

Use WP Add Custom CSS plugin

The WP Add Custom CSS plugin not only lets you add custom CSS to your whole website, but also to any specific posts or pages that you want.

Once the plugin is installed, it will give you an extra edit box in the post editor and page editor to add your custom CSS.

  • Add custom CSS to the whole website.
  • Add custom CSS to specific posts.
  • Add custom CSS to specific pages.
  • Add custom CSS to specific custom post types (such as Woocommerce products).

Use Simple Custom CSS and JS plugin

Simple Custom CSS and JS plugin not only lets you add custom CSS, but also custom JavaScript to your website.

At the time this post is written, this plugin doesn't let us add custom CSS and JavaScript to specific pages or posts. However, it lets us manage multiple custom CSS and JavaScript snippets without having to put all of them in the same spot, as we would with the plugins mentioned above.

  • Manage Custom Codes
  • Add/Edit CSS
  • Add/Edit Javascript

Final words

These plugins let us modify the CSS without touching the theme’s source code.

Which of these options suits you best? Let me know in the comments! 😀

How to add custom slots to your WordPress theme

How to add custom slots to your WordPress theme

This tutorial will show you how to add custom slots to your WordPress theme so that you can drag and drop widgets to a location that is not yet defined by your theme.

Custom slots… what?

By custom slots I mean the pre-defined locations on your theme where you can drag and drop widgets to make them appear on your website. As you can see in the screenshot below, the custom slots are the elements wrapped in the red square.

The official term for custom slots in WordPress is sidebar. In the early days of WordPress, we could only drag widgets to the side columns of the website, hence the term sidebar. Then, as WordPress grew, the so-called sidebar could be placed anywhere on the website: at the top, in the footer, in between posts, etc. However, the name stayed unchanged, and WordPress developers still use that term to refer to these custom slots.

In this tutorial, just to avoid confusion, I will use the term custom slots.

And to make this tutorial more understandable, I will use the Hemingway theme.

After this tutorial, I will have an extra slot as below:

The widgets dragged to that new slot will appear on top of the post column of my Hemingway theme.

Step by step tutorial

Open your theme editor

To add the extra slot to your theme, first you have to open your theme editor.

On your wp-admin side menu, click on Appearance > Editor. Your theme editor will open.

Select the theme you want to edit.

Register your widget slot

Find the code where your theme already registers its slots. Normally it would be in the functions.php file.

After you have found the code, copy and insert another block of code for your new custom slot.

As you can see in the screenshot, the theme registers its default slots by adding an action to widgets_init.

add_action( 'widgets_init', 'hemingway_sidebar_reg' );

You can also see that I have added my new custom slot to the hemingway_sidebar_reg function (the block of code in red square).

// Register the new "Content Column Top" slot (sidebar).
register_sidebar(array(
    'name' => __( 'Content Column Top', 'hemingway' ),
    'id' => 'content-column-top',
    'description' => __( 'Widgets in this area will be shown on top of content column.', 'hemingway' ),
    'before_title' => '<h3 class="widget-title">',
    'after_title' => '</h3>',
    'before_widget' => '<div class="widget %2$s"><div class="widget-content">',
    'after_widget' => '</div><div class="clear"></div></div>'
));

The rest of the code in the hemingway_sidebar_reg function is the default code that the theme uses to register its original slots, which are Footer A, Footer B, Footer C, and Sidebar.

Now if you go to your widgets drag-and-drop screen (Appearance > Widgets), the new extra slot should appear there for you to insert widgets.

For demo purposes, I dragged a Search widget there.

Make the new slot appear on front end

Now that you can configure your new slot, let's move on to displaying it on the front end.

To do this, open the file that renders the HTML where you want the new slot to appear, and make WordPress render whatever is dragged to the new slot there using this function:

<?php dynamic_sidebar( 'content-column-top' ); ?>

To better understand this, look at the screenshot from my website:

As you can see, I have inserted some code in the red square, telling WordPress to render the new slot content in between two layers of div.
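
Since the screenshot isn't reproduced here, here is a minimal sketch of what that insertion could look like in the theme template; the wrapper markup and class name are my own illustration, not Hemingway's actual markup:

<!-- Render the new slot only when it has widgets assigned to it -->
<div class="content-column-top">
	<?php if ( is_active_sidebar( 'content-column-top' ) ) : ?>
		<?php dynamic_sidebar( 'content-column-top' ); ?>
	<?php endif; ?>
</div>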

Now if we go to our front end, the new slot content should appear there. However, the styling may be a little bit messed up since we have added new elements to the website.

Adding new CSS to your theme

To make the website beautiful again, I added new CSS to the theme’s CSS file to format the new slot.

You can add custom CSS by editing your theme’s CSS file or by using a plugin.

In my case, I edited my theme’s CSS file as below:

Now the new custom slot appears correctly on my front page.

Final words

Adding new custom slots (sidebars) to your theme may be a little bit different depending on the theme, but you get the idea.

Hope this tutorial can help you customize your theme more conveniently.

Have better ideas? Share them with me in the comments!