Month: July 2017

How to set up multiple Tor instances with Polipo in Windows

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

By default, Tor opens a SOCKS proxy on port 9050 that allows applications to route their internet traffic through the Tor network.

In this post, we will show how to set up multiple Tor instances on the same Windows machine.

If you want to set up multiple Tor instances with Polipo on Linux, read this post instead.

Install Tor

Installing Tor on Windows is easy.

Go to the official Tor download page to download Tor.

Normally, if you download the Tor Browser package, you will get a Tor Browser that is already configured to use Tor as a proxy to access the internet.

In our case, we will set up multiple instances of Tor as multiple proxies, which means we need Tor only, so you can download the Expert Bundle.

After you download it, unzip it to wherever you like. The extracted folder should contain:

  • <parent folder>/Tor: contains executable files (exe and dlls)
  • <parent folder>/Data/Tor: contains the data that Tor uses to look up the geographic location of IPs.

If you run tor.exe in the /Tor folder, you can start a Tor instance right away.

Install Polipo

The Tor proxy supports the SOCKS protocol but does not support the HTTP proxy protocol.

To let applications that can only talk to an HTTP proxy use Tor, we can use Polipo as a tunnel: Polipo opens an HTTP proxy and transfers the packets between its HTTP proxy and Tor’s SOCKS proxy. That way, an application can leverage Tor’s power even if it only supports HTTP proxies.

You can download Polipo here.

After downloading Polipo, just extract the zip file. We only need the polipo.exe file.

Set up multiple instances folder structure

In order to run multiple instances, we must give each instance its own ports and data folder using command line arguments.

My setup looks like this

  • MultiTor\bin\Tor: Tor executable folder (contains tor.exe and dlls)
  • MultiTor\bin\Data: Tor geoip folder (extracted from downloaded Tor package)
  • MultiTor\bin\Polipo: Polipo executable folder (contains polipo.exe)
  • MultiTor\data\tor-10001: data folder of Tor instance listening on port 10001
  • MultiTor\data\tor-10002: data folder of Tor instance listening on port 10002

In the above setup, I use the same executable files for all instances, but a different data folder for each instance.

You can have your own setup, as long as the data folders differ among instances.

Start the first instance

Now, to start the first instance, I switch to the MultiTor folder and run the following command

bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:10001 CONTROLPort 127.0.0.1:20001 DATADirectory data\tor-10001

The command above will start a Tor instance that has

  • 10001 as SOCKS proxy port
  • 20001 as CONTROL port
  • data\tor-10001 as data folder

We will also start a Polipo instance that tunnels through that Tor instance

bin\Polipo\polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1"

The command above will start a Polipo instance that has

  • 30001 as HTTP proxy port
  • talks to Tor proxy on port 10001
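Before adding more instances, you can sanity-check the chain. Assuming you have curl installed, request the Tor check service through Polipo’s HTTP proxy; if the chain works, the reported IP should be a Tor exit rather than your own

REM assumes curl is installed; any "what is my IP" service also works
curl -x http://127.0.0.1:30001 https://check.torproject.org/api/ip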

Start the next instances

To start the second instance, first we have to tweak the commands a little so that the first instance starts in a new window, releasing the current command prompt.

To do that, add the start command at the beginning of each line

start bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:10001 CONTROLPort 127.0.0.1:20001 DATADirectory data\tor-10001
start bin\Polipo\polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1"

Now we can start the second instance following the same pattern

start bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:10002 CONTROLPort 127.0.0.1:20002 DATADirectory data\tor-10002
start bin\Polipo\polipo socksParentProxy="127.0.0.1:10002" proxyPort=30002 proxyAddress="127.0.0.1"

Note that the port numbers and the data folder have been changed for the second instance.

We can start as many instances as we want in this way.

Automate the task

To start a lot of instances, we can make a .bat file to automate the task as follows

start_all_tor.bat

CD C:\Tools\MultiTor\
SETLOCAL ENABLEDELAYEDEXPANSION
FOR /L %%G IN (10001,1,10100) DO (
 SET /a sp=%%G+0
 SET /a cp=%%G+10000
 echo !sp!
 echo !cp!
 mkdir data\tor-!sp!
 start bin\Tor\tor.exe GeoIPFile bin\Data\Tor\geoip GeoIPv6File bin\Data\Tor\geoip6 SOCKSPort 127.0.0.1:!sp! CONTROLPort 127.0.0.1:!cp! DATADirectory data\tor-!sp!
)
ENDLOCAL

start_all_polipo.bat

CD C:\Tools\MultiTor\
SETLOCAL ENABLEDELAYEDEXPANSION
FOR /L %%G IN (10001,1,10100) DO (
 SET /a sp=%%G+0
 SET /a pp=%%G+20000
 echo !sp!
 echo !pp!
 start bin\Polipo\polipo socksParentProxy="127.0.0.1:!sp!" proxyPort=!pp! proxyAddress="127.0.0.1"
)
ENDLOCAL

The first batch script will start 100 Tor proxy instances that listen on ports 10001-10100, with control ports 20001-20100 and data folders from data\tor-10001 to data\tor-10100.

The second batch script will start 100 Polipo proxy instances that listen on ports 30001-30100 and talk to the Tor proxy instances on ports 10001-10100 correspondingly.

To stop all the instances, you can run a script like this

stop_all_tor.bat

taskkill /IM tor.exe /F

stop_all_polipo.bat

taskkill /IM polipo.exe /F

RabbitMQ installation guide on Linux Ubuntu 16.04

Introduction

With more than 35,000 production deployments of RabbitMQ world-wide at small startups and large enterprises, RabbitMQ is the most popular open source message broker.
RabbitMQ is lightweight and easy to deploy on premise and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.

In this guide, we’ll cover how to install and configure a 3-node RabbitMQ cluster, with HAProxy and the management plugin.

Installing RabbitMQ on a single node

Update our system’s default application toolset

$ sudo apt-get update
$ sudo apt-get -y upgrade

Enable the RabbitMQ application repository

$ echo "deb http://www.rabbitmq.com/debian/ testing main" >> /etc/apt/sources.list

Add the verification key for the package

$ curl http://www.rabbitmq.com/rabbitmq-signing-key-public.asc | sudo apt-key add -

Update the source with our new addition from above

$ sudo apt-get update

And finally, download and install RabbitMQ

$ sudo apt-get install rabbitmq-server

Start rabbitMQ server

$ sudo systemctl restart rabbitmq-server.service

Congratulations! You’ve just successfully installed your RabbitMQ server.

Configure a multi-node RabbitMQ cluster

In this part, we’ll go through how to set up a multi-node RabbitMQ cluster (3 nodes in this example).

Sync the erlang cookie

RabbitMQ nodes and CLI tools (e.g. rabbitmqctl) use a cookie to determine whether they are allowed to communicate with each other. For two nodes to be able to communicate they must have the same shared secret called the Erlang cookie. The cookie is just a string of alphanumeric characters. It can be as long or short as you like. Every cluster node must have the same cookie.

Erlang VM will automatically create a random cookie file when the RabbitMQ server starts up. The easiest way to proceed is to allow one node to create the file, and then copy it to all the other nodes in the cluster.

On Unix systems, the cookie will be typically located in /var/lib/rabbitmq/.erlang.cookie or $HOME/.erlang.cookie.

When the cookie is misconfigured (for example, not identical), RabbitMQ will log errors such as “Connection attempt from disallowed node” and “Could not auto-cluster”.
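For example, here is a minimal way to sync the cookie from rabbit01 to the other two nodes, assuming you can copy files between the machines with sufficient privileges (adjust paths to your installation):

# run on rabbit01; assumes root ssh access between the nodes
rabbit01$ sudo scp /var/lib/rabbitmq/.erlang.cookie rabbit02:/var/lib/rabbitmq/.erlang.cookie
rabbit01$ sudo scp /var/lib/rabbitmq/.erlang.cookie rabbit03:/var/lib/rabbitmq/.erlang.cookie
# then on every node, restore ownership and permissions and restart
$ sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
$ sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie
$ sudo systemctl restart rabbitmq-server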

Configure hostname and RabbitMQ Nodename

Configure Hostname

Edit hosts:

$ sudo vim /etc/hosts
xxx.xxx.xxx.xxx rabbit01
xxx.xxx.xxx.xxx rabbit02
xxx.xxx.xxx.xxx rabbit03

Configure RabbitMQ hostname

Edit rabbitmq-env.conf:

rabbit01$ sudo vim /etc/rabbitmq/rabbitmq-env.conf
NODENAME=rabbit@rabbit01
rabbit02$ sudo vim /etc/rabbitmq/rabbitmq-env.conf
NODENAME=rabbit@rabbit02
rabbit03$ sudo vim /etc/rabbitmq/rabbitmq-env.conf
NODENAME=rabbit@rabbit03

Starting independent nodes

Clusters are set up by re-configuring existing RabbitMQ nodes into a cluster configuration. Hence the first step is to start RabbitMQ on all nodes in the normal way:

rabbit01$ rabbitmq-server -detached
rabbit02$ rabbitmq-server -detached
rabbit03$ rabbitmq-server -detached

This creates three independent RabbitMQ brokers, one on each node, as confirmed by the cluster_status command:

rabbit01$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit01 ...
[{nodes,[{disc,[rabbit@rabbit01]}]},{running_nodes,[rabbit@rabbit01]}]
...done.
rabbit02$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit02 ...
[{nodes,[{disc,[rabbit@rabbit02]}]},{running_nodes,[rabbit@rabbit02]}]
...done.
rabbit03$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit03 ...
[{nodes,[{disc,[rabbit@rabbit03]}]},{running_nodes,[rabbit@rabbit03]}]
...done.

The node name of a RabbitMQ broker started from the rabbitmq-server shell script is rabbit@shorthostname, where the short node name is lower-case (as in rabbit@rabbit01, above). If you use the rabbitmq-server.bat batch file on Windows, the short node name is upper-case (as in rabbit@RABBIT01). When you type node names, case matters, and these strings must match exactly.

Create the cluster

In order to link up our three nodes in a cluster, we tell two of the nodes, say rabbit@rabbit02 and rabbit@rabbit03, to join the cluster of the third, say rabbit@rabbit01.

We first join rabbit@rabbit02 in a cluster with rabbit@rabbit01. To do that, on rabbit@rabbit02 we stop the RabbitMQ application, join the rabbit@rabbit01 cluster, then restart the RabbitMQ application. Note that joining a cluster implicitly resets the node, thus removing all resources and data that were previously present on that node.

rabbit02$ rabbitmqctl stop_app
Stopping node rabbit@rabbit02 ...done.
rabbit02$ rabbitmqctl join_cluster rabbit@rabbit01
Clustering node rabbit@rabbit02 with [rabbit@rabbit01] ...done.
rabbit02$ rabbitmqctl start_app
Starting node rabbit@rabbit02 ...done.

We can see that the two nodes are joined in a cluster by running the cluster_status command on either of the nodes:

rabbit01$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit01 ...
[{nodes,[{disc,[rabbit@rabbit01,rabbit@rabbit02]}]},
{running_nodes,[rabbit@rabbit02,rabbit@rabbit01]}]
...done.
rabbit02$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit02 ...
[{nodes,[{disc,[rabbit@rabbit01,rabbit@rabbit02]}]},
{running_nodes,[rabbit@rabbit01,rabbit@rabbit02]}]
...done.

Now we join rabbit@rabbit03 to the same cluster. The steps are identical to the ones above, except this time we’ll cluster to rabbit02 to demonstrate that the node chosen to cluster to does not matter – it is enough to provide one online node and the node will be clustered to the cluster that the specified node belongs to.

rabbit03$ rabbitmqctl stop_app
Stopping node rabbit@rabbit03 ...done.
rabbit03$ rabbitmqctl join_cluster rabbit@rabbit02
Clustering node rabbit@rabbit03 with rabbit@rabbit02 ...done.
rabbit03$ rabbitmqctl start_app
Starting node rabbit@rabbit03 ...done.

We can see that the three nodes are joined in a cluster by running the cluster_status command on any of the nodes:

rabbit01$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit01 ...
[{nodes,[{disc,[rabbit@rabbit01,rabbit@rabbit02,rabbit@rabbit03]}]},
{running_nodes,[rabbit@rabbit03,rabbit@rabbit02,rabbit@rabbit01]}]
...done.
rabbit02$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit02 ...
[{nodes,[{disc,[rabbit@rabbit01,rabbit@rabbit02,rabbit@rabbit03]}]},
{running_nodes,[rabbit@rabbit03,rabbit@rabbit01,rabbit@rabbit02]}]
...done.
rabbit03$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit03 ...
[{nodes,[{disc,[rabbit@rabbit03,rabbit@rabbit02,rabbit@rabbit01]}]},
{running_nodes,[rabbit@rabbit02,rabbit@rabbit01,rabbit@rabbit03]}]
...done.

By following the above steps we can add new nodes to the cluster at any time, while the cluster is running.

Management Plugin

Enable rabbitmq_management on every node

$ sudo rabbitmq-plugins enable rabbitmq_management

Create an admin user on one node; this admin user can be used on any node in the cluster

$ sudo rabbitmqctl add_user <user> <password>
$ sudo rabbitmqctl set_user_tags <user> administrator
$ sudo rabbitmqctl set_permissions -p / <user> ".*" ".*" ".*"
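You can verify the user and its permissions on any node:

$ sudo rabbitmqctl list_users
$ sudo rabbitmqctl list_permissions -p /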

Now you can access the management website with this user at xxx.xxx.xxx.xxx:15672

Set up HAProxy

Install haproxy

$ sudo apt-get install haproxy

Verify that HAProxy is working

$ haproxy -v
HA-Proxy version 1.6.3 2015/12/25
Copyright 2000-2015 Willy Tarreau 

Configure HAProxy for RabbitMQ and rabbitmq_management

$ sudo vim /etc/haproxy/haproxy.cfg
defaults
...
listen rabbitmq
        bind 0.0.0.0:5672
        mode tcp
        balance roundrobin
        timeout client 3h
        timeout server 3h
        option clitcpka
        server rabbit01 xxx.xxx.xxx.xx1:5672 check inter 5s fall 3 rise 2
        server rabbit02 xxx.xxx.xxx.xx2:5672 check inter 5s fall 3 rise 2
        server rabbit03 xxx.xxx.xxx.xx3:5672 check inter 5s fall 3 rise 2
listen rabbitmq_management
        bind 0.0.0.0:15672
        mode tcp
        balance roundrobin
        server rabbit01 xxx.xxx.xxx.xx1:15672 check fall 3 rise 2
        server rabbit02 xxx.xxx.xxx.xx2:15672 check fall 3 rise 2
        server rabbit03 xxx.xxx.xxx.xx3:15672 check fall 3 rise 2
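After editing the configuration, validate it and restart HAProxy to apply the changes:

# check the configuration file for syntax errors first
$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl restart haproxy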

How to set up multiple Tor instances with Polipo on Linux

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

By default, Tor opens a SOCKS proxy on port 9050 that allows applications to route their internet traffic through the Tor network.

In this post, we will show how to set up multiple Tor instances on the same Linux machine.

If you want to set up multiple Tor instances with Polipo on Windows, read this post instead.

Install Tor

On Ubuntu

$ sudo apt update
$ sudo apt install tor

After that, Tor should be accessible via /usr/bin/tor

Install Polipo

The Tor proxy supports the SOCKS protocol but does not support the HTTP proxy protocol.

To let applications that can only talk to an HTTP proxy use Tor, we can use Polipo as a tunnel: Polipo opens an HTTP proxy and transfers the packets between its HTTP proxy and Tor’s SOCKS proxy. That way, an application can leverage Tor’s power even if it only supports HTTP proxies.

On Ubuntu

$ sudo apt update
$ sudo apt install polipo

Start the first instance

After Tor is installed, we can start Tor right away by running /usr/bin/tor.

However, to run multiple instances of Tor, we still have some extra tasks to do.

First of all, when a Tor instance runs, it will need the following things set up

  • a SOCKSPort, the port its SOCKS proxy listens on
  • a CONTROLPort, the port it listens on for controller commands
  • a DATADirectory, the folder where it saves its data.

Different instances must have these things set up differently.

Fortunately, Tor allows us to set up these things using command line arguments like this

$ /usr/bin/tor SOCKSPort 10001 CONTROLPort 20001 DATADirectory /tmp/tor10001/data

You can also specify which IP address your Tor instance listens on

$ /usr/bin/tor SOCKSPort 127.0.0.1:10001 CONTROLPort 127.0.0.1:20001 DATADirectory /tmp/tor10001/data

Let’s start the first instance now

$ mkdir -p /tmp/tor10001/data
$ /usr/bin/tor SOCKSPort 10001 CONTROLPort 20001 DATADirectory /tmp/tor10001/data

We will also start a Polipo instance that tunnels through that Tor instance

$ polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1"

The command above will start a Polipo instance that has

  • 30001 as HTTP proxy port
  • talks to Tor proxy on port 10001
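You can sanity-check the chain at this point. Assuming curl is installed, request the Tor check service through Polipo’s HTTP proxy; the reported IP should be a Tor exit rather than your own

# assumes curl is installed; any "what is my IP" service also works
$ curl -x http://127.0.0.1:30001 https://check.torproject.org/api/ip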

Start the next instances

In order to start the next instance, first you have to relaunch the first instance in the background with the nohup command

$ mkdir -p /tmp/tor10001/data
$ nohup /usr/bin/tor SOCKSPort 10001 CONTROLPort 20001 DATADirectory /tmp/tor10001/data &
$ nohup polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="127.0.0.1" &

nohup also keeps your Tor instance running after you log out of your ssh session.

You can now start the next instances following the same pattern, just change the arguments appropriately

$ mkdir -p /tmp/tor10002/data
$ nohup /usr/bin/tor SOCKSPort 10002 CONTROLPort 20002 DATADirectory /tmp/tor10002/data &
$ nohup polipo socksParentProxy="127.0.0.1:10002" proxyPort=30002 proxyAddress="127.0.0.1" &

Notice the parameters have changed to 10002, 20002 and 30002 correspondingly.

Automate the task

If you want to start a lot of instances, you can make a script to start all the instances automatically like this

start_all_tor.sh

#!/bin/bash

a=10001
b=20001
n=10100

echo "Start multiple Tors"
echo "Begin port " $a
echo "End port " $n

while [ $a -le $n ]
do 
    echo "Start Tor on port" $a
    mkdir -p /tmp/tor$a/data
    nohup /usr/bin/tor SOCKSPort $a CONTROLPort $b DATADirectory /tmp/tor$a/data &
    a=$(($a + 1)) 
    b=$(($b + 1)) 
done

The above script will start 100 Tor instances with SOCKSPort from 10001 to 10100, CONTROLPort from 20001 to 20100 with corresponding data folders.

start_all_polipo.sh

#!/bin/bash
a=10001
b=30001
n=10100
echo "Start multiple Polipo "
echo "Begin port " $a
echo "End port " $n

while [ $a -le $n ]
do
    echo "Start Polipo " $a
    nohup polipo socksParentProxy="127.0.0.1:$a" proxyPort=$b proxyAddress="127.0.0.1" &
    a=$(($a + 1))
    b=$(($b + 1))
done

To stop all the instances, you can run a script like this

stop_all_tor.sh

#!/bin/bash
# kill by exact process name so unrelated processes whose
# command lines merely contain "tor" are not touched
pkill -9 -x tor

stop_all_polipo.sh

#!/bin/bash
# kill by exact process name to avoid matching unrelated processes
pkill -9 -x polipo

How to monitor any command output in real-time with watch command on Linux

On Linux, the watch command re-runs a command at a regular interval and refreshes its output on screen, which gives you simple real-time monitoring. Let’s look at some practical examples.

Monitor log output

One of the most common scenarios in Linux is looking at the log output of an application using the tail command

$ tail -n 10 output.log

This command will output the last 10 lines of the output.log file to the terminal console.

Let’s say you want to check whether anything new has been appended to the log file. You would have to re-run the command every time. To do this automatically, just add the watch command at the beginning of the previous command like this

$ watch tail -n 10 output.log

By default, the watch command refreshes every 2 seconds. If you want to change the interval, add a -n argument like this

$ watch -n 1 tail -n 10 output.log

This will make the command refresh every 1 second, showing the last 10 lines of the log file.
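A related trick: the -d flag makes watch highlight the differences between successive updates, so newly appended lines stand out

$ watch -d -n 1 tail -n 10 output.log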

Monitor disk I/O status

You can monitor disk I/O with iostat command.

First, you have to install iostat, which is included in the sysstat package

On CentOS

$ yum install sysstat

On Ubuntu

$ apt install sysstat

Then run the command

$ iostat

Monitor I/O status in real-time

$ watch iostat

Monitor network connection

Let’s say you are running a web server on port 80 and want to check how many clients are connecting to the server.

First, we list all connections to the server using the netstat command

$ netstat -naltp

Then we filter those connections to port 80 that are established

$ netstat -naltp | grep :80 | grep ESTABLISHED

If you want to count the connections, add a -c argument

$ netstat -naltp | grep :80 | grep ESTABLISHED -c

To refresh the command in real-time, add watch at the beginning

$ watch "netstat -naltp | grep :80 | grep ESTABLISHED -c"

Notice that in the command above, the command to be watched is wrapped inside quotes. Without the quotes, the shell applies the pipes to the output of watch itself instead of passing the whole pipeline to watch. Putting the command in quotes makes watch run the entire pipeline on every refresh.

Monitor disk usage and disk free space

To monitor disk free space

$ watch df -h

To monitor disk usage

$ watch du -c -h -d 1

Monitor how many instances of a process are running

To monitor how many instances of a process are running, use a ps command with watch

$ watch "ps aux | grep -c nginx"

Top helpful Linux commands for beginners

Useful Command Line Keyboard Shortcuts

The following keyboard shortcuts are incredibly useful and will save you loads of time:

  • CTRL + U – Cuts text up until the cursor.
  • CTRL + K – Cuts text from the cursor until the end of the line
  • CTRL + Y – Pastes text
  • CTRL + E – Move cursor to end of line
  • CTRL + A – Move cursor to the beginning of the line
  • ALT + F – Jump forward to next space
  • ALT + B – Skip back to previous space
  • ALT + Backspace – Delete previous word
  • CTRL + W – Cut word behind cursor
  • CTRL + Insert – Copy selected text
  • Shift + Insert – Pastes text into terminal

sudo !!

If you don’t know it yet, sudo (super user do) is the prefix to run a command with an elevated privilege (like Run As Administrator in Windows).

sudo !! is a useful trick that saves you from retyping the previous command after it fails with “Permission denied”.

For example, imagine you have entered the following command:

$ apt-get install ranger

The words “Permission denied” will appear unless you are logged in with elevated privileges.

Now, instead of typing this

$ sudo apt-get install ranger

Just type…

$ sudo !!

… and the previous denied command will be executed with sudo privileges.

Pausing Commands And Running Commands In The Background

I have already written a guide showing how to run terminal commands in the background.

So what is this tip about?

Imagine you have opened a file in nano as follows:

$ sudo nano abc.txt

Halfway through typing text into the file you realise that you quickly want to type another command into the terminal but you can’t because you opened nano in foreground mode.

You may think your only option is to save the file, exit nano, run the command and then re-open nano.

All you have to do is press CTRL + Z and the foreground application will pause and you will be returned to the command line. You can then run any command you like and when you have finished return to your previously paused session by entering “fg” into the terminal window and pressing return.

An interesting thing to try out is to open a file in nano, enter some text and pause the session. Now open another file in nano, enter some text and pause the session. If you now enter “fg” you return to the second file you opened in nano. If you exit nano and enter “fg” again you return to the first file you opened within nano.
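A minimal session illustrating the idea (the %1 job number comes from the output of the jobs command):

$ nano abc.txt   # press CTRL + Z to pause it
$ nano def.txt   # press CTRL + Z again
$ jobs           # lists both stopped nano sessions
$ fg %1          # resume the first session directly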

Use nohup To Run Commands After You Log Out Of An SSH Session

The nohup command is really useful if you use the ssh command to log onto other machines.

So what does nohup do?

Imagine you are logged on to another computer remotely using ssh and you want to run a command that takes a long time, then exit the ssh session but leave the command running even though you are no longer connected. nohup lets you do just that.

For example, if I started downloading a large file on a remote host using ssh without using the nohup command then I would have to wait for the download to finish before logging off the ssh session and before shutting down the laptop.

To use nohup all I have to type is nohup followed by the command as follows:

$ nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &

Running A Linux Command ‘AT’ A Specific Time

The ‘nohup’ command is good if you are connected to an SSH server and you want the command to remain running after logging out of the SSH session.

Imagine you want to run that same command at a specific point in time.

The ‘at’ command allows you to do just that. ‘at’ can be used as follows.

$ at 10:38 PM Fri
at> cowsay 'hello'
at> CTRL + D

The above command will run the program cowsay at 10:38 PM on Friday evening.

The syntax is ‘at’ followed by the date and time to run.

When the at> prompt appears enter the command you want to run at the specified time.

Pressing CTRL + D saves the job and returns you to the prompt.

There are lots of different date and time formats and it is worth checking the man pages for more ways to use ‘at’.
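Two companion commands worth knowing: atq lists your queued jobs with their job numbers, and atrm removes one by number

$ atq     # list pending jobs
$ atrm 1  # remove job number 1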

Man Pages

Man pages give you an outline of what commands are supposed to do and the switches that can be used with them.

The man pages are kind of dull on their own. (I guess they weren’t designed to excite us).

You can however do things to make your usage of man more appealing.

$ export PAGER=most

You will need to install ‘most’ for this to work but when you do it makes your man pages more colourful.

You can limit the width of the man page to a certain number of columns using the following command:

$ export MANWIDTH=80

Finally, if you have a browser available you can open any man page in the default browser by using the -H switch as follows:

$ man -H 

Note this only works if you have a default browser set up within the $BROWSER environment variable.

Use htop To View And Manage Processes

Which command do you currently use to find out which processes are running on your computer? My bet is that you are using ps and that you are using various switches to get the output you desire.

Install htop. It is definitely a tool you will wish that you installed earlier.

htop provides a list of all running processes in the terminal, much like the task manager in Windows.

You can use a mixture of function keys to change the sort order and the columns that are displayed. You can also kill processes from within htop.

To run htop simply type the following into the terminal window:

$ htop

Navigate The File System Using ranger

If htop is immensely useful for controlling the processes running via the command line then ranger is immensely useful for navigating the file system using the command line.

You will probably need to install ranger to be able to use it but once installed you can run it simply by typing the following into the terminal:

$ ranger

The command line window will be much like any other file manager, but it works left to right rather than top to bottom: the left arrow key moves you up the folder structure and the right arrow key moves you down it.

It is worth reading the man pages before using ranger so that you can get used to all the keyboard shortcuts that are available.

Cancel A Shutdown

So you started the shutdown either via the command line or from the GUI and you realised that you really didn’t want to do that.

$ shutdown -c

Note that if the shutdown has already started then it may be too late to stop the shutdown.

Another command to try is as follows:

$ pkill shutdown

Killing Hung Processes The Easy Way

Imagine you are running an application and for whatever reason it hangs.

You could use ps -ef to find the process and then kill the process or you could use htop.

There is a quicker and easier command that you will love called xkill.

Simply type the following into a terminal and then click on the window of the application you want to kill.

$ xkill

What happens though if the whole system is hanging?

Hold down the Alt and SysRq keys on your keyboard and, whilst they are held down, type the following letters slowly:

REISUB

This will restart your computer without having to hold in the power button.

Download Youtube Videos

Generally speaking most of us are quite happy for Youtube to host the videos and we watch them by streaming them through our chosen media player.

If you know you are going to be offline for a while (e.g. due to a plane journey or travelling between the south of Scotland and the north of England) then you may wish to download a few videos onto a pen drive and watch them at your leisure.

All you have to do is install youtube-dl from your package manager.

You can use youtube-dl as follows:

$ youtube-dl url-to-video

You can get the url to any video on Youtube by clicking the share link on the video’s page. Simply copy the link and paste it into the command line (using the shift + insert shortcut).

Download Files From The Web With wget

The wget command makes it possible for you to download files from the web using the terminal.

The syntax is as follows:

$ wget path/to/filename

For example:

$ wget http://sourceforge.net/projects/antix-linux/files/Final/MX-krete/antiX-15-V_386-full.iso/download

There are a large number of switches that can be used with wget, such as -O, which lets you save the download under a different filename.

In the example above I downloaded AntiX Linux from Sourceforge. The filename antiX-15-V_386-full.iso is quite long. It would be nice to download it as just antix.iso. To do this use the following command:

$ wget -O antix.iso http://sourceforge.net/projects/antix-linux/files/Final/MX-krete/antiX-15-V_386-full.iso/download

Downloading a single file doesn’t seem worth it; you could easily just navigate to the webpage using a browser and click the link.

If however you want to download a dozen files then being able to add the links to an import file and use wget to download the files from those links will be much quicker.

Simply use the -i switch as follows:

$ wget -i /path/to/importfile

For more about wget visit http://www.tecmint.com/10-wget-command-examples-in-linux/.

Monitor your server performance with top command

When your machine is running slow and you want to find out the causes, or which processes are eating a lot of resources, just type top in the command line:

$ top

This will bring up a real-time resource monitoring screen showing CPU usage, disk I/O bottlenecks, RAM usage of every running process and the whole system in general.

To show the full path of running processes, press “c”. To show only the processes’ names, press “c” again.

To show usage of each separate CPU core, press “1”. To show CPU usage as a total, press “1” again.

To kill a process, press “k”, type in the process ID and press “ENTER”.

You can find more control keys by pressing “h” to bring up the help document. Go back to the “top” screen by pressing “ESC”.

Monitor your disk free space with df command

To know how much free space is available on your disks, type

$ df -h

The -h argument tells the df command to show sizes in a human-readable format (2.0G instead of 2000000000).

Monitor your disk usage of each folder with du command

To know how much space is used up by each file/folder, type

$ du -h -c -d 1 ./*
  • -h tells the command to show the size in a human readable format  (2.0GB instead of 2000000000).
  • -c tells the command to show the total size of the folder, including subfolders and files (instead of showing that folder size only)
  • -d 1 tells the command to list the size of folders and subfolders up to child level 1. You can change it to 0 or 2 or whatever you are interested in.
  • ./* tells the command to do this to all the folders and files that are the children of current folder. You can change this param to any path you want, with or without wildcard (*).

Monitor your server traffic with iftop command

To monitor in real-time how much traffic is going in and out of your server, for each external IP, and each port, type the following command

$ iftop -n -B

If you get permission denied, try running it with sudo privileges.

  • -n tells the command not to try to translate the remote host to domain names but instead leave those as IPs.
  • -B tells the command to show traffic in bytes instead of bits.

If you don’t have iftop installed in your system, you can install it

For Ubuntu users

$ apt install iftop

For Centos/Fedora users

# Install epel-release if not yet available
$ yum install epel-release
# Install iftop
$ yum install iftop

Monitor a log file in real-time with the watch and tail commands

If you have an application that logs to a file and you want to monitor that log in real-time, type the following command

$ watch tail /path/to/log/file.log

The watch command will watch for any update in the output of the tail command and keep printing the latest tail of that log file to the terminal console.

If you want to change the number of lines to print out, change the command to

$ watch tail -n 15 /path/to/log/file.log

Remember to replace 15 with the number of lines that you want.

List all running processes with ps command

To list all running processes in your system, type the following command

$ ps aux

To filter the list with only the processes that you are interested in, add a grep command

$ ps aux | grep abc | grep xyz

To count how many processes are running, add a -c argument

$ ps aux | grep abc | grep xyz -c

List all connections with netstat command

To list all connections in your system, type the following command

$ netstat -naltp

To filter the result, add a grep command. For example, to filter all HTTP connections (port 80) that are currently open (ESTABLISHED), type the following command

$ netstat -naltp | grep :80 | grep ESTABLISHED

To count how many results matches the filter, add -c argument

$ netstat -naltp | grep :80 | grep ESTABLISHED -c

How to use nohup to execute commands in the background and keep them running after you exit from a shell prompt

Most of the time you log in to a remote server via ssh. If you start a shell script or command and then exit (or the remote connection is aborted), the process will get killed. Sometimes a job or command takes a long time, and if you are not sure when it will finish, it is better to leave it running in the background. But if you log out of the system, the job will be stopped and terminated by your shell. What do you do to keep the job running in the background when the process gets SIGHUP?

nohup command to the rescue

In these situations, we can use the nohup command line utility, which allows a command, process or shell script to continue running in the background after we log out from a shell.

nohup is a POSIX command to ignore the HUP (hangup) signal. The HUP signal is, by convention, the way a terminal warns dependent processes of logout.

Output that would normally go to the terminal goes to a file called nohup.out if it has not already been redirected.

nohup command syntax:

The syntax is as follows

$ nohup command-name &
$ exit

or

$ nohup /path/to/command-name arg1 arg2 > myoutput.log &
$ exit

Where,

  • command-name: the name of a shell script or command. You can pass arguments to the command or shell script.
  • &: nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.
  • exit: after nohup and & start the command in the background, you still have to type exit or press CTRL + D to leave the shell.

Use jobs -l command to list all jobs:

$ jobs -l

nohup command examples

First, login to remote server using ssh command:

$ ssh user@remote.server.com

I am going to execute a shell script called pullftp.sh:

$ nohup pullftp.sh &

Type exit or press CTRL + D to exit from remote server:

> exit

In this example, I am going to find all programs and scripts with setuid bit set on, enter:

$ nohup find / -xdev -type f -perm /u=s -print > out.txt &

Type exit or press CTRL + D to exit from remote server. The job still keeps on running after you exit.

> exit

nohup is often used in combination with the nice command to run processes at a lower priority.

Please note that nohup does not change the scheduling priority of COMMAND; use nice command for that purpose. For example:

$ nohup nice -n 5 ls / > out.txt &

As you can see, nohup keeps processes running after you exit from a shell. Read the man pages of nohup(1) and nice(1) for more information. Note that nohup is available on almost all Solaris/BSD/Linux/UNIX variants.

What’s the difference between nohup and ampersand (&)

Both nohup myprocess.out & and myprocess.out & set myprocess.out running in the background. After shutting down the terminal, the process is still running. What’s the difference between them?

nohup catches the hangup signal (see man 7 signal) while the ampersand doesn’t (unless the shell is configured that way or doesn’t send SIGHUP at all).

Normally, when running a command using & and exiting the shell afterwards, the shell will terminate the sub-command with the hangup signal (kill -SIGHUP <pid>). This can be prevented using nohup, as it catches the signal and ignores it so that it never reaches the actual application.

In case you’re using bash, you can use the command shopt | grep hupon to find out whether your shell sends SIGHUP to its child processes or not. If it is off, processes won’t be terminated when you exit. More information on how bash terminates applications can be found here.
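For example, on bash (the exact output formatting may differ between versions):

$ shopt | grep hupon
huponexit       off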

There are cases where nohup does not work, for example when the process you start reinstalls its own SIGHUP handler, as is the case here.

Advanced topics

Existing jobs, processes

Some shells (e.g. bash) provide a shell builtin that may be used to prevent SIGHUP being sent or propagated to existing jobs, even if they were not started with nohup. In bash, this can be obtained by using disown -h job; using the same builtin without arguments removes the job from the job table, which also implies that the job will not receive the signal. Before using disown on an active job, it should be stopped by Ctrl-Z, and continued in the background by the bg command.[2] Another relevant bash option is shopt huponexit, which automatically sends the HUP signal to jobs when the shell is exiting normally.[3]

The AIX and Solaris versions of nohup have a -p option that modifies a running process to ignore future SIGHUP signals. Unlike the above-described disown builtin of bash, nohup -p accepts process IDs.[4]

Overcoming hanging

Note that nohupping backgrounded jobs is typically used to avoid terminating them when logging off from a remote SSH session. A different issue that often arises in this situation is that ssh is refusing to log off (“hangs”), since it refuses to lose any data from/to the background job(s).[5][6] This problem can also be overcome by redirecting all three I/O streams:

$ nohup ./myprogram > foo.out 2> foo.err < /dev/null &

Also note that a closing SSH session does not always send a HUP signal to dependent processes. Among others, this depends on whether a pseudo-terminal was allocated or not.[7]

Alternatives

Terminal multiplexer

A terminal multiplexer can run a command in a separate session, detached from the current terminal, which means that if the current session ends, the detached session and its associated processes keep running. One can then reattach to the session later on.

For example, the following invocation of screen will run somescript.sh in the background of a detached session:

screen -A -m -d -S somename ./somescript.sh &

disown command

The disown command is used to remove jobs from the job table, or to mark jobs so that a SIGHUP signal is not sent on session termination.

Here is how you can try it out:

$ pullftp.sh &
$ disown -h
$ exit

From the bash(1) man page:

By default, removes each JOBSPEC argument from the table of active jobs. If the -h option is given, the job is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. The -a option, when JOBSPEC is not supplied, means to remove all jobs from the job table; the -r option means to remove only running jobs.

at command

You can use the at command to queue a job for later execution. For example, you can queue the pullftp.sh script for execution one minute from now:

$ echo "pullftp.sh" | at now + 1 minute

Push notification for web browser

I. Introduction

Have you ever seen your friends’ messages pop up on your desktop? Have you ever seen an Amazon or Lazada sale campaign show up on your desktop even when you are not browsing those websites? Those pop-ups are called push notifications.
A push notification is a message that pops up on your clients’ desktops or devices. Publishers can send notifications anytime they want, and receivers don’t have to be browsing the website or be in the app to receive them. So why are push notifications used?

For the client

Push notifications provide convenience and value to the client, who can receive valuable information at any time: hot news, sports scores, traffic, weather, flight status, discounted items, sale campaigns, etc.

For the publisher

Push notifications are one of the most direct ways to speak to users. They don’t get caught in a spam filter or forgotten in an inbox. They can remind users to use your app or visit your website, whether or not they are currently browsing your website or have your app open. They can also be used to drive actions, such as:

  • Promote a product or offer to increase sales
  • Improve the customer experience
  • Send transaction receipts right away
  • Drive users to other marketing channels
  • Convert unknown users to known customers

In this article, we’ll focus on push notifications for web browsers. We will figure out how they work and go through some sample code to create our very first push notification system.
Let’s begin!

II. How it works

1. Figure

Figure 01 – Push notification system workflow

2. Definitions

First, we’ll go through some definitions that will be used in this article

  • Notification: a message displayed to the user outside of the app’s normal UI (i.e., the browser)
  • Service-worker: a script that your browser runs in the background, separate from a web page, opening the door to features that don’t need a web page or user interaction.
  • Push server: the middle server; it receives requests from your server and sends them to clients when they are online

3. Concept

The above figure 01 explains almost everything about how push notification works. In this part, we are going to explain it a bit deeper.
We have 3 main parts in a push notification system: the producer (your server), the client (user), and the push server (Google GCM, Google FCM, Amazon SNS, etc.). Each part has its own role:

  • Client: receives push requests and pops up notifications.
  • Server: manages client tokens and makes push requests.
  • Push server: the middle server; it receives push requests from the server and sends them to clients when they are online.

And this is how they work together

  • Step 00: At first, while the user is browsing your website, you have to register a service-worker and ask the user for permission to run it. If permission is not granted, there’s nothing more we can do. If permission is granted, we move on to the next step.
  • Step 01: The client registers itself with the push server, using an API key and sender ID to identify which app and server may send notification requests to the worker.
  • Step 02: The push server receives the register request from the client and returns a token to the worker; this token now represents the service-worker.
  • Step 03: The client receives the token from the push server and sends it to the server, which saves it to a database for later use.
  • Step 04: The server now has the client’s token. From now on, whenever the server wants to send a notification to the client, it just sends a notification request to the push server, with the client’s token to identify the client and a private key to validate the data.
  • Step 05: When the push server receives a notification request, it stores it and waits for the client to come online before delivering it. When the client receives the push notification, the service-worker pops up the notification.

III. Service worker

How can we show a notification even when the user isn’t on our website? Because the one who shows the notification is the service-worker. So what is a service-worker and how does it work? Let’s figure it out.

1. What is service-worker

A service worker is a script that your browser runs in the background, separate from a web page, opening the door to features that don’t need a web page or user interaction. Today, they already include features like push notifications and background sync. In the future, service workers will support other things like periodic sync or geofencing. The core feature discussed in this tutorial is the ability to intercept and handle network requests, including programmatically managing a cache of responses.
The reason this is such an exciting API is that it allows you to support offline experiences, giving developers complete control over the experience.

2. Concept

Figure 02 – Service worker lifecycle

A service worker has a lifecycle that is completely separate from your web page.
To install a service worker for your site, you need to register it, which you do in your page’s JavaScript. Registering a service worker will cause the browser to start the service worker install step in the background.
Typically during the install step, you’ll want to cache some static assets. If all the files are cached successfully, then the service worker becomes installed. If any of the files fail to download and cache, then the install step will fail and the service worker won’t activate (i.e. won’t be installed). If that happens, don’t worry, it’ll try again next time. But that means if it does install, you know you’ve got those static assets in the cache.
When installed, the activation step will follow and this is a great opportunity for handling any management of old caches, which we’ll cover during the service worker update section.
After the activation step, the service worker will control all pages that fall under its scope, though the page that registered the service worker for the first time won’t be controlled until it’s loaded again. Once a service worker is in control, it will be in one of two states: either the service worker will be terminated to save memory, or it will handle fetch and message events that occur when a network request or message is made from your page.

3. Sample Code

For security reasons, you can only register/run a service-worker in certain environments, such as HTTPS or localhost. So for testing this code, we’re going to use localhost; you can easily build a simple server with the “Web Server for Chrome” extension. For more information, read this

Register service-worker

main.js

if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/serviceworker.js')
    .then(function (reg) {
        console.log('Serviceworker registered successfully with scope: ', reg.scope);
    })
    .catch(function (err) {
        console.log("Something wrong: ", err);
    });
}

serviceworker.js

self.addEventListener('install', function(event) {
    // Handle install event
    console.log('install');
});
self.addEventListener('activate', function(event) {
    // Handle activate event
    console.log('activate');
});

When you run the register command, the service worker will install and then activate automatically.
To know more about the service-worker registration process (first time / update), read this article

Get permission from the user

Now your service worker is installed and activated. But you have to get permission from the user to do things like notifications.
main.js

Notification.requestPermission(function(status) {
    console.log('Notification permission status:', status);
});

This is a one-time action; after the user allows or blocks, the setting is saved in the browser’s notification exceptions.

Summary code

main.js

navigator.serviceWorker.register('/serviceworker.js').then(function (reg) {
    console.log(reg);
})
switch (Notification.permission) {
    case 'granted':
        console.log('already granted');
        break;
    case 'denied':
        console.log('blocked!!')
        break;
    case 'default':
    default:
        console.log('get permission')
        Notification.requestPermission(function(status) {
            console.log('Notification permission status:', status);
        });
        break;
};

serviceworker.js

self.addEventListener('install', function(event) {
    // Handle install event
    console.log('install');
});
self.addEventListener('activate', function(event) {
    // Handle activate event
    console.log('activate');
});

IV. Simple notification (without push server)

In this section, we will go through how to show notifications directly from your website.

1. Concept

Figure 03 – Simple notification workflow

As we discussed above, the one who pops up the notification is the service-worker. So, once you have an activated service-worker with permission granted, you can command it to pop up a notification. You can issue this command directly from your website, though of course this way you can only pop up notifications while the user is browsing your website. Let’s go through some sample code!

2. Sample Code

Command from main.js

main.js

function displayNotification() {
    if (Notification.permission == 'granted') {
        navigator.serviceWorker.getRegistration()
        .then(function(reg) {
            reg.showNotification('Hello world!');
        })
    }
}
displayNotification();

Command from service-worker itself

E.g. automatically show a notification when the service-worker installs

// Listen for install event, set callback
self.addEventListener('install', function(event) {
    // Perform some task
    console.log('install');
    self.registration.showNotification("installed");
});

V. Notification with push server (FCM)

In this section, we will go through how a full push notification system works.

1. Concept

Figure 01 – Push notification system workflow

In this tutorial, we’ll use the FCM push server. FCM is a Google service and the later version of GCM. For more information, read this

2. Sample Code

a. Create FCM app

First, we register our app with FCM

Go to this website: https://console.firebase.google.com
Add project


Now let’s store our information: the public key (info) and the server (private) key. You can use the server key or the legacy server key to validate your push message requests. I usually use the short one (legacy server key).

b. Register the user with the push server and get the returned token

main.js

// firebase.js must be loaded into the page first, e.g. via
// <script src="https://www.gstatic.com/firebasejs/4.1.3/firebase.js"></script>

// Initialize Firebase
var config = {
    apiKey: "...",
    authDomain: "...firebaseapp.com",
    databaseURL: "https://...firebaseio.com",
    projectId: "...",
    storageBucket: "...",
    messagingSenderId: "..."
};
firebase.initializeApp(config);

// Get token
const messaging = firebase.messaging();

messaging.requestPermission()
.then(function() {
    console.log("Permission granted");
    return messaging.getToken();
})
.then(function(token) {
    console.log("Token: ", token);
    // send token to your server here
})
.catch(function(err) {
    console.log("Err: ", err);
})

// handle receive message
messaging.onMessage(function(payload) {
    console.log("payload: ", payload);
})

Messaging is firebase’s messaging library. When you call messaging.requestPermission(), it will look for a service worker named ‘firebase-messaging-sw.js’ and then do the registration. So we have to create that service worker for the code above to run successfully.
firebase-messaging-sw.js

// welcome to firebase-messaging
    // do nothing

Run main.js and you will get a token logged to the console. This token now represents your browser and service-worker. Send it to your server and we are ready to push notifications.
There are 2 cases when you receive a request from the push server: you are focusing on the website or not. If you are NOT focusing on the website, the message goes to the service worker, which pops up the notification. If you are focusing on it, the ‘messaging.onMessage’ event is triggered so you can handle the message freely; you can still command the service worker to pop up a notification, but in my experience that makes users feel annoyed and distracted. Let’s try our magic!
Send a notification from your server to the FCM server

URL: https://fcm.googleapis.com/fcm/send
METHOD: POST
HEADER:
    Content-Type: application/json
    Authorization: key=<your secret key>
BODY:
    {
        "to": <service-worker's token>,
        "notification": {
            "title": "my title",
            "content": "my content"
        }
    }
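The same request expressed as a curl sketch, with the key and token left as placeholders for your own values:

curl -X POST https://fcm.googleapis.com/fcm/send \
    -H "Content-Type: application/json" \
    -H "Authorization: key=<your secret key>" \
    -d '{"to": "<service-worker token>", "notification": {"title": "my title", "body": "my content"}}'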

When you name the payload field notification, the service worker will pop up the notification automatically if possible. Here is the result:

When the page is focused, the data arrives in the onMessage handler; when it is not focused, the service worker pops up the notification.

Congratulations!! Now you have learned how to push notifications. You can now improve your website, increasing your users’ experience.
In the next section, we will go a little bit deeper into FCM, and how to take full control of your push events.

VI. FCM advanced – event handling

1. Custom service worker file location and name

As we discussed above, when you use firebase messaging, it automatically looks for a service worker named ‘firebase-messaging-sw.js’ at the root of your website. So what do we do if we want to keep it in another location or use another file name? Here is the solution: we register a service worker first, then make firebase use this service worker as its firebase-messaging-sw. And here is the code:
main.js

var config = {
...
};
firebase.initializeApp(config);

const messaging = firebase.messaging();

navigator.serviceWorker.register('/path/to/your/service-worker/serviceworker.js')
.then(function(reg) {
    messaging.useServiceWorker(reg);

    messaging.requestPermission()
    .then(function() {
        console.log("Permission granted");
        return messaging.getToken();
    })
    .then(function(token) {
        console.log("Token: ", token);
    })
    .catch(function(err) {
        console.log("Err: ", err);
    })
})

2. Full control of received messages

Firebase will pop up a notification when you send a notification command, but what if you want to handle the data or content manually from the service worker? Here is the solution

  • When the user is focusing on your website, just like before, you can handle the message in the onMessage event, so there’s nothing more to do here.
  • When the user is NOT focusing on your website, the message will be received by the service worker, so we’re going to add a message handler to the service worker.

Let’s view the code:
serviceworker.js

// this assumes firebase has been imported and initialized in this
// service worker (importScripts + firebase.initializeApp + firebase.messaging())
messaging.setBackgroundMessageHandler(function(payload) {
    // do whatever you want here, in this example we pop-ups notification
    const title = payload.data.title;
    const options = {
        body: payload.data.content
    };
    return self.registration.showNotification(title, options);
});

** messaging.setBackgroundMessageHandler is just like the messaging.onMessage event, but for the serviceworker.

3. Handle token change

Sometimes FCM changes a client’s token. It’s not critical, but the user won’t be able to receive messages until the next time they register the service-worker. So if you want to update the token immediately, you have to handle that event. Here is the code:
serviceworker.js

// Callback fired if Instance ID token is updated.
messaging.onTokenRefresh(function() {
    messaging.getToken()
    .then(function(refreshedToken) {
        console.log('Token refreshed.');
	// send new token info to server here
    })
    .catch(function(err) {
        console.log('Unable to retrieve refreshed token ', err);
    });
});

4. Send messages to a group of users

This is the solution for sending bulk push messages. The idea is to add users’ tokens to groups (topics). Whenever you want to send a message to all users in a group, you only have to send one message. Let’s figure it out.

Register token to group

Register one token
URL: https://iid.googleapis.com/iid/v1/<user’s Token>/rel/topics/<Topic name>
METHOD: POST
HEADER: 
    Content-Type: application/json
    Authorization: key=<your secret key>
Bulk register
URL: https://iid.googleapis.com/iid/v1:batchAdd
METHOD: POST
HEADER:
    Content-Type: application/json
    Authorization: key=<your secret key>
    Cache-Control: no-cache
BODY:
    {
        "to": "/topics/<TOPIC NAME>",
        "registration_tokens": <"Token1", “Token2”,...>
    }

Send push message to group

URL: https://fcm.googleapis.com/fcm/send
METHOD: POST
HEADER:
    Authorization: key=<your server secret key>
    Content-Type: application/json
BODY
    {
        "to": "/topics/<TOPIC NAME>",
        "notification": {
            "title": "my title",
            "body": "my content"
        }
    }
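Again as a curl sketch, with the topic name and server key left as placeholders:

curl -X POST https://fcm.googleapis.com/fcm/send \
    -H "Content-Type: application/json" \
    -H "Authorization: key=<your server secret key>" \
    -d '{"to": "/topics/<TOPIC NAME>", "notification": {"title": "my title", "body": "my content"}}'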

For more information: read this

VII. Notification – structure and event handling

Up until now in this article, we have just popped up the simplest notification with a title and body. In this section, we will talk about all the elements of a notification, and how to make it do things like vibrating or opening a new tab on click. Let’s begin!

1. Notification structure

This is the API of showing a notification

(ServiceWorkerRegistration).showNotification(<title>, <options>);

The <title> is a string, and the <options> can be any of the following:

{
  	"//": "Visual Options",
  	"body": "<String>",
  	"icon": "<URL String>",
  	"image": "<URL String>",
  	"badge": "<URL String>",
  	"vibrate": "<Array of Integers>",
  	"sound": "<URL String>",
  	"dir": "<String of 'auto' | 'ltr' | 'rtl'>",

  	"//": "Behavioural Options",
  	"tag": "<String>",
  	"data": "<Anything>",
  	"requireInteraction": "<boolean>",
  	"renotify": "<Boolean>",
  	"silent": "<Boolean>",

  	"//": "Both Visual & Behavioural Options",
  	"actions": "<Array of Strings>",

  	"//": "Information Option. No visual affect.",
  	"timestamp": "<Long>"
}

2. Event handling

We will go through some important service-worker and notification events

Install event: triggered on install

serviceworker.js

// Listen for install event, set callback
self.addEventListener('install', function(event) {
    // Perform some task
    console.log('install');
});

Activate event: triggered on activation

serviceworker.js

self.addEventListener('activate', function(event) {
    // Perform some task
    console.log('activate');
});

Close event: triggered when the notification is closed

serviceworker.js

self.addEventListener('notificationclose', function(event) {
    // Perform some task
    console.log('Closed notification');
})

Click event: triggered when the notification is clicked

serviceworker.js

self.addEventListener('notificationclick', function(event) {
    var notification = event.notification;
    var action = event.action;
    switch (action) {
        case <action-name>:  	// define in options->action
            break;
        default:		// unrecognized actions (click on body)
            break;
    }
})

** There is also a push event, but since we use firebase messaging, we use messaging.onMessage or messaging.setBackgroundMessageHandler instead.