WordPress is the most popular website platform in the world, powering around 30% of the web. Over 60 million people have chosen WordPress to power the place on the web they call “home”.
In this tutorial, we will demonstrate how to set up multiple WordPress websites on the same Ubuntu 16.04 server. The setup includes nginx, MySQL, PHP, and WordPress itself.
Prerequisites
Before you complete this tutorial, you should have a regular, non-root user account on your server with sudo privileges. You can learn how to set up this type of account by completing DigitalOcean’s Ubuntu 16.04 initial server setup.
Once you have your user available, sign into your server with that username. You are now ready to begin the steps outlined in this guide.
First install Nginx and allow the ‘Nginx HTTP’ profile through ufw, as covered in Step 1 of the LEMP guide later in this post. Then check the firewall with sudo ufw status; you should see HTTP traffic allowed in the displayed output:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
With the new firewall rule added, you can test if the server is up and running by accessing your server’s domain name or public IP address in your web browser.
Navigate your browser to test if nginx is working:
Test Nginx’s default landing page in browser
http://server_domain_or_IP
If you see the default Nginx welcome page, you have successfully installed Nginx.
Step 2. Install MySQL to Manage Site Data
Now that we have a web server, we need to install MySQL, a database management system, to store and manage the data for our site.
Install MySQL server
$ sudo apt-get install mysql-server
You will be asked to supply a root (administrative) password for use within the MySQL system.
The MySQL database software is now installed, but its configuration is not exactly complete yet.
To secure the installation, we can run a simple security script that will ask whether we want to modify some insecure defaults. Begin the script by typing:
Secure MySQL installation
$ sudo mysql_secure_installation
You will be asked to enter the password you set for the MySQL root account. Next, you will be asked if you want to configure the VALIDATE PASSWORD PLUGIN.
Warning: Enabling this feature is something of a judgment call. If enabled, passwords which don’t match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.
Answer y for yes, or anything else to continue without enabling.
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?
Press y|Y for Yes, any other key for No:
If you’ve enabled validation, you’ll be asked to select a level of password validation. Keep in mind that if you enter 2, for the strongest level, you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.
There are three levels of password validation policy:
LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1
If you enabled password validation, you’ll be shown the password strength for the existing root password, and asked if you want to change that password. If you are happy with your current password, enter n for “no” at the prompt:
Using existing password for root.
Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n
For the rest of the questions, you should press Y and hit the Enter key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.
At this point, your database system is now set up and we can move on.
Step 3. Install PHP for Processing
We now have Nginx installed to serve our pages and MySQL installed to store and manage our data. However, we still don’t have anything that can generate dynamic content. We can use PHP for this.
Since Nginx does not contain native PHP processing like some other web servers, we will need to install php-fpm, which stands for “fastCGI process manager”. We will tell Nginx to pass PHP requests to this software for processing.
We can install this module and will also grab an additional helper package that will allow PHP to communicate with our database backend. The installation will pull in the necessary PHP core files. Do this by typing:
Install php-fpm and php-mysql
$ sudo apt-get install php-fpm php-mysql
Configure the PHP Processor
We now have our PHP components installed, but we need to make a slight configuration change to make our setup more secure.
Open the main php-fpm configuration file with root privileges:
$ sudo nano /etc/php/7.0/fpm/php.ini
What we are looking for in this file is the parameter that sets cgi.fix_pathinfo. This will be commented out with a semi-colon (;) and set to “1” by default.
This is an extremely insecure setting because it tells PHP to attempt to execute the closest file it can find if the requested PHP file cannot be found. This basically would allow users to craft PHP requests in a way that would allow them to execute scripts that they shouldn’t be allowed to execute.
We will change both of these conditions by uncommenting the line and setting it to “0” like this:
Change cgi.fix_pathinfo
# /etc/php/7.0/fpm/php.ini
cgi.fix_pathinfo=0
Save and close the file when you are finished.
Now, we just need to restart our PHP processor by typing:
Restart PHP processor
$ sudo systemctl restart php7.0-fpm
This will implement the change that we made.
Step 4. Install WordPress
The plan
To make the best use of the server, we will set it up so that we can also host other websites in PHP or Python in the future. To do that, we will use the following arrangement:
Each website’s files will live in their own folder under /www (for example /www/hexadix.com), and each website will have its own nginx configuration file at /etc/nginx/sites-enabled/yourdomain.conf. The full configuration for each website will be in that file. The default nginx configuration file will be removed.
The /www folder will be chowned to nginx’s user.
By default, nginx runs as www-data or nginx; you can find this out by running ps aux | grep nginx in your terminal.
Using this arrangement, we can freely add future websites in the same manner and keep everything organized.
Step 4.1. Create database for your WordPress
Login to MySQL shell
$ mysql -u root -p
Create database and mysql user for our WordPress
mysql> CREATE DATABASE wordpress CHARACTER SET utf8 COLLATE utf8_unicode_ci;
mysql> CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'wppassword';
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> SHOW GRANTS FOR 'wpuser'@'localhost';
A common practice here is to use website name as database name, and also as database user name. This way, each database user will have access to only its database, and the naming convention is easy to remember.
Step 4.2. Setup WordPress folder
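If the /www folder and the site’s document root do not exist yet, create them first (hexadix.com is the example domain used throughout this tutorial; substitute your own):
$ sudo mkdir -p /www/hexadix.com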
Change directory to /www
$ cd /www
Download the latest WordPress.
$ wget http://wordpress.org/latest.tar.gz
Extract it.
$ tar -zxvf latest.tar.gz
Move it to our document root.
$ mv wordpress/* /www/hexadix.com
Copy the wp-config-sample.php file to create the wp-config.php file.
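The copy command itself is not shown in the original; assuming the WordPress files were moved into /www/hexadix.com as above, it would be:
$ cp /www/hexadix.com/wp-config-sample.php /www/hexadix.com/wp-config.php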
Edit the config file and fill in the database information.
$ vi /www/hexadix.com/wp-config.php
Default will look like below.
# /www/hexadix.com/wp-config.php
// ** MySQL settings – You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');
/** MySQL database username */
define('DB_USER', 'username_here');
/** MySQL database password */
define('DB_PASSWORD', 'password_here');
/** MySQL hostname */
define('DB_HOST', 'localhost');
After modifying the entries to match the database and user we created, the file will look like this:
# /www/hexadix.com/wp-config.php
// ** MySQL settings – You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');
/** MySQL database username */
define('DB_USER', 'wpuser');
/** MySQL database password */
define('DB_PASSWORD', 'wppassword');
/** MySQL hostname */
define('DB_HOST', 'localhost');
Make the nginx user the owner of the WordPress directory.
$ chown -R www-data:www-data /www/hexadix.com/
Step 4.3. Configure nginx
Open nginx main configuration file
$ sudo vi /etc/nginx/nginx.conf
Look for the line below.
include /etc/nginx/sites-enabled/*;
Change it to
include /etc/nginx/sites-enabled/*.conf;
The purpose of this step is to tell nginx to only load files with .conf extension. This way, whenever we want to disable a website, we only need to change the extension of that website’s configuration file to something else, which is more convenient.
Test new nginx configuration
$ sudo nginx -t
Reload nginx
$ sudo systemctl reload nginx
If everything works correctly, when you navigate to your website in the browser now, the default Welcome to nginx page should no longer be displayed.
Create nginx configuration file for our WordPress
$ sudo vi /etc/nginx/sites-enabled/hexadix.com.conf
Remember to change the server_name and root to match your website. Also, the fastcgi_pass path may differ in your server installation.
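The contents of the configuration file are not reproduced in the original post; a minimal server block along the following lines should work for WordPress on Ubuntu 16.04 (the domain, root path, and php-fpm socket path below are assumptions to adjust for your installation):
# /etc/nginx/sites-enabled/hexadix.com.conf
server {
    listen 80;
    server_name hexadix.com www.hexadix.com;
    root /www/hexadix.com;
    index index.php index.html index.htm;

    # WordPress permalinks: fall back to index.php
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Pass PHP requests to php-fpm
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    # Deny access to .htaccess files
    location ~ /\.ht {
        deny all;
    }
}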
Test new nginx configuration
$ sudo nginx -t
Reload nginx
$ sudo systemctl reload nginx
If everything works correctly, by this time you can navigate to your domain (in this case http://hexadix.com) and see your WordPress installation wizard page. Follow the instructions in the wizard to set up an admin account and you’re done.
You can now login to your WordPress administrator site by navigating to:
http://yourdomain.com/wp-admin/
Your front page is at:
http://yourdomain.com
Setup multiple WordPress
To set up another WordPress site on the same server, repeat the whole of Step 4 with the corresponding domain, database, and configuration for the new site.
Conclusion
You should now have multiple WordPress websites running on your Ubuntu 16.04 server. You can set up as many additional websites as you need by following the same steps in this tutorial.
How to set up Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 16.04
The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. This is an acronym that describes a Linux operating system, with an Nginx web server. The backend data is stored in the MySQL database and the dynamic processing is handled by PHP.
In this guide, we will demonstrate how to install a LEMP stack on an Ubuntu 16.04 server. The Ubuntu operating system takes care of the first requirement. We will describe how to get the rest of the components up and running.
Prerequisites
Before you complete this tutorial, you should have a regular, non-root user account on your server with sudo privileges. You can learn how to set up this type of account by completing DigitalOcean’s Ubuntu 16.04 initial server setup.
Once you have your user available, sign into your server with that username. You are now ready to begin the steps outlined in this guide.
Step 1: Install the Nginx Web Server
In order to display web pages to our site visitors, we are going to employ Nginx, a modern, efficient web server.
All of the software we will be using for this procedure will come directly from Ubuntu’s default package repositories. This means we can use the apt package management suite to complete the installation.
Since this is our first time using apt for this session, we should start off by updating our local package index. We can then install the server.
Install nginx
sudo apt-get update
sudo apt-get install nginx
On Ubuntu 16.04, Nginx is configured to start running upon installation.
If you have the ufw firewall running, as outlined in our initial setup guide, you will need to allow connections to Nginx. Nginx registers itself with ufw upon installation, so the procedure is rather straightforward.
It is recommended that you enable the most restrictive profile that will still allow the traffic you want. Since we haven’t configured SSL for our server yet, in this guide, we will only need to allow traffic on port 80.
Enable ufw to allow HTTP
sudo ufw allow 'Nginx HTTP'
Verify the change
sudo ufw status
You should see HTTP traffic allowed in the displayed output:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
With the new firewall rule added, you can test if the server is up and running by accessing your server’s domain name or public IP address in your web browser.
If you do not have a domain name pointed at your server and you do not know your server’s public IP address, you can find it by typing one of the following into your terminal:
Find server’s public IP address
ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
This will print out a few IP addresses. You can try each of them in turn in your web browser.
As an alternative, you can check which IP address is accessible as viewed from other locations on the internet:
Alternative way to find server’s public IP address
curl -4 icanhazip.com
Type one of the addresses that you receive in your web browser. It should take you to Nginx’s default landing page:
Test Nginx’s default landing page in browser
http://server_domain_or_IP
If you see the default Nginx welcome page, you have successfully installed Nginx.
Step 2: Install MySQL to Manage Site Data
Now that we have a web server, we need to install MySQL, a database management system, to store and manage the data for our site.
Install MySQL server
sudo apt-get install mysql-server
You will be asked to supply a root (administrative) password for use within the MySQL system.
The MySQL database software is now installed, but its configuration is not exactly complete yet.
To secure the installation, we can run a simple security script that will ask whether we want to modify some insecure defaults. Begin the script by typing:
Secure MySQL installation
sudo mysql_secure_installation
You will be asked to enter the password you set for the MySQL root account. Next, you will be asked if you want to configure the VALIDATE PASSWORD PLUGIN.
Warning: Enabling this feature is something of a judgment call. If enabled, passwords which don’t match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.
Answer y for yes, or anything else to continue without enabling.
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?
Press y|Y for Yes, any other key for No:
If you’ve enabled validation, you’ll be asked to select a level of password validation. Keep in mind that if you enter 2, for the strongest level, you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.
There are three levels of password validation policy:
LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1
If you enabled password validation, you’ll be shown the password strength for the existing root password, and asked if you want to change that password. If you are happy with your current password, enter n for “no” at the prompt:
Using existing password for root.
Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n
For the rest of the questions, you should press Y and hit the Enter key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.
At this point, your database system is now set up and we can move on.
Step 3: Install PHP for Processing
We now have Nginx installed to serve our pages and MySQL installed to store and manage our data. However, we still don’t have anything that can generate dynamic content. We can use PHP for this.
Since Nginx does not contain native PHP processing like some other web servers, we will need to install php-fpm, which stands for “fastCGI process manager”. We will tell Nginx to pass PHP requests to this software for processing.
We can install this module and will also grab an additional helper package that will allow PHP to communicate with our database backend. The installation will pull in the necessary PHP core files. Do this by typing:
Install php-fpm and php-mysql
sudo apt-get install php-fpm php-mysql
Configure the PHP Processor
We now have our PHP components installed, but we need to make a slight configuration change to make our setup more secure.
Open the main php-fpm configuration file with root privileges:
sudo nano /etc/php/7.0/fpm/php.ini
What we are looking for in this file is the parameter that sets cgi.fix_pathinfo. This will be commented out with a semi-colon (;) and set to “1” by default.
This is an extremely insecure setting because it tells PHP to attempt to execute the closest file it can find if the requested PHP file cannot be found. This basically would allow users to craft PHP requests in a way that would allow them to execute scripts that they shouldn’t be allowed to execute.
We will change both of these conditions by uncommenting the line and setting it to “0” like this:
Change cgi.fix_pathinfo
# /etc/php/7.0/fpm/php.ini
cgi.fix_pathinfo=0
Save and close the file when you are finished.
Now, we just need to restart our PHP processor by typing:
Restart PHP processor
sudo systemctl restart php7.0-fpm
This will implement the change that we made.
Step 4: Configure Nginx to Use the PHP Processor
Now, we have all of the required components installed. The only configuration change we still need is to tell Nginx to use our PHP processor for dynamic content.
We do this on the server block level (server blocks are similar to Apache’s virtual hosts). Open the default Nginx server block configuration file by typing:
Open nginx default configuration file
sudo nano /etc/nginx/sites-available/default
Currently, with the comments removed, the Nginx default server block file looks like this:
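The file itself is missing from this copy; on Ubuntu 16.04 the default server block (with comments removed) should look roughly like this:
# /etc/nginx/sites-available/default
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}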
We need to make some changes to this file for our site.
First, we need to add index.php as the first value of our index directive so that files named index.php are served, if available, when a directory is requested.
We can modify the server_name directive to point to our server’s domain name or public IP address.
For the actual PHP processing, we just need to uncomment a segment of the file that handles PHP requests by removing the pound symbols (#) from in front of each line. This will be the location ~ \.php$ block, the included fastcgi-php.conf snippet, and the socket associated with php-fpm.
We will also uncomment the location block dealing with .htaccess files using the same method. Nginx doesn’t process these files. If any of these files happen to find their way into the document root, they should not be served to visitors.
The changes that you need to make are shown in the example below:
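The edited file is also missing from this copy; after the changes described above it should look something like the sketch below (server_domain_or_IP and the php7.0-fpm socket path are placeholders to adjust for your setup):
# /etc/nginx/sites-available/default
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;

    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}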
When you’ve made the above changes, you can save and close the file.
Test your configuration file for syntax errors by typing:
Test nginx configurations
sudo nginx -t
If any errors are reported, go back and recheck your file before continuing.
When you are ready, reload Nginx to make the necessary changes:
Reload nginx
sudo systemctl reload nginx
Step 5: Create a PHP File to Test Configuration
Your LEMP stack should now be completely set up. We can test it to validate that Nginx can correctly hand .php files off to our PHP processor.
We can do this by creating a test PHP file in our document root. Open a new file called info.php within your document root in your text editor:
Create a new PHP file to test PHP installation
sudo nano /var/www/html/info.php
Type or paste the following lines into the new file. This is valid PHP code that will return information about our server:
# /var/www/html/info.php
<?php
phpinfo();
When you are finished, save and close the file.
Now, you can visit this page in your web browser by visiting your server’s domain name or public IP address followed by /info.php:
http://server_domain_or_IP/info.php
You should see a web page that has been generated by PHP with information about your server.
If you see such a page, you’ve set up PHP processing with Nginx successfully.
After verifying that Nginx renders the page correctly, it’s best to remove the file you created as it can actually give unauthorized users some hints about your configuration that may help them try to break in. You can always regenerate this file if you need it later.
For now, remove the file by typing:
sudo rm /var/www/html/info.php
Conclusion
You should now have a LEMP stack configured on your Ubuntu 16.04 server. This gives you a very flexible foundation for serving web content to your visitors.
How to type math notations using LaTeX syntax in your WordPress
If you have the Jetpack by WordPress.com plugin installed and activated on your WordPress site, the good news is that Jetpack comes with a LaTeX rendering function out of the box.
To use this function, first ensure that you have enabled it in Jetpack’s settings.
Now you can go to your blog post and insert the following code:
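The code block is missing from this copy of the post; given the later references to MathJax, it was presumably a MathJax loader script along these lines (the CDN URL and version here are assumptions):
<script type="text/javascript" async
  src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-MML-AM_CHTML">
</script>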
If you are using WordPress and want to include the above script, you can use Insert Headers and Footers plugin and paste the script in the plugin’s setting.
After the script is included, all the LaTex you type will turn into beautiful math notations effortlessly.
Examples
Example #1
When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
will output:
When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
Notice that using \( ... \) results in an inline LaTeX expression, while using $$ ... $$ results in a LaTeX block on its own line.
By this time, you may have already wondered: what if you want to show the plain, original LaTeX code instead of the converted math notation, like the LaTeX code pasted in the examples above?
The answer is simpler than expected: just put the LaTeX code inside a <pre> </pre> tag. MathJax is smart enough to skip anything that is nested inside <pre> </pre>.
Now, if you want that original LaTeX code beautifully highlighted, you may want to check out Google’s code-prettify library. But that’s another story :D.
This post discusses how to leverage keepalived to proxy requests (both internal and external) with only two proxy servers, without forfeiting high availability.
Prerequisites
Two CentOS servers with Nginx installed, a spare LAN IP, and a spare WAN IP from your cloud provider.
Deployment
1. Install keepalived (if not already present):
yum install keepalived
2. Allow binding to an IP that is not defined on the system (kernel level)
This lets a process bind to an IP address that is not currently assigned to any interface, which is needed so that each proxy can take over the virtual IPs.
Add this to /etc/sysctl.conf:
net.ipv4.ip_nonlocal_bind = 1
Force sysctl to apply new setting:
sysctl -p
3. Configure keepalived on BOTH proxies
Edit /etc/keepalived/keepalived.conf with the content below (a default config file is created at installation; you can replace it):
vrrp_sync_group VG_1 {
    group {
        WAN_1
        LAN_1
    }
}

vrrp_instance WAN_1 {            # master for WAN (WAN_1 is just a name)
    state MASTER                 # BACKUP on the other proxy
    interface eth0
    virtual_router_id 3
    dont_track_primary
    priority 90                  # lower is SLAVE; should be <90 on the other proxy
    preempt_delay 30
    garp_master_delay 1
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass yourpass
    }
    track_interface {
        eth2
    }
    virtual_ipaddress {
        130.65.156.140/24 dev eth2
    }
}

vrrp_instance LAN_1 {            # backup for LAN (LAN_1 is just a name)
    state BACKUP                 # MASTER on the other proxy
    interface eth0
    virtual_router_id 4
    dont_track_primary
    priority 80                  # lower is SLAVE; should be >90 on the other proxy
    preempt_delay 30
    garp_master_delay 1
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass yourpass
    }
    track_interface {
        eth1
    }
    virtual_ipaddress {
        10.5.247.10/24 dev eth1
    }
}
4. Set BOOTPROTO="static" on all 4 interfaces.
In some cloud environments the interfaces are periodically restarted (they still get the same IP address), but keepalived will not start properly if the WAN interface is configured as dynamic.
5. Run chkconfig keepalived on so the service starts automatically at boot.
6. Use ip addr show <interface> (with eth1 or eth0) on both proxies to check the status.
Explanation
A proxy can run at most 255 VRRP instances.
Instance WAN_1 makes proxy 1 the master for WAN traffic: any request to 130.65.156.140 is proxied to web1 and web2 according to the nginx upstream configuration (several load-balancing mechanisms are available: IP hash, round-robin, least connections). If proxy 1 fails over, proxy 2 takes the master role for WAN traffic and continues proxying.
Instance LAN_1 makes proxy 2 the master for LAN traffic (database requests and responses in this scenario, to 10.5.247.10).
If either proxy fails completely (all interfaces down), the other takes the master role for both LAN and WAN traffic, eliminating the single point of failure.
Flow of traffic:
An HTTP request from a user goes to the WAN VIP –> e2 (proxy 1) by default –> load balanced to both web servers through e1 (proxy 1).
e1 (proxy 2) is ALWAYS ready to forward HTTP traffic to both web servers, but there is no incoming traffic on e2 (proxy 2) until it claims the WAN VIP; until then, proxy 2 is idle for HTTP traffic.
A DB request from a web server goes to the LAN VIP –> e1 (proxy 2) by default –> load balanced to both DB servers through e1 (proxy 2).
e1 (proxy 1) is ALWAYS ready to forward to both DB servers, but there is no incoming DB traffic on e1 (proxy 1) until it claims the LAN VIP.
Important parameters:
interface: the interface used to exchange VRRP protocol packets for every instance. It should be the local Ethernet interface in both cases.
priority: lower is BACKUP, higher is MASTER for each VRRP instance.
dont_track_primary: we use the local Ethernet interface to exchange VRRP information and a separate interface to health-check the other side (the track_interface parameter). Tracking the primary interface itself can cause issues when the interface sleeps without failing over completely.
virtual_ipaddress: the virtual IP address that the master takes over, on the corresponding interface.
Discussion: what if only e1 (proxy 1) fails over (the other interfaces still work)? What would the solution be? Please leave your comment.
Nginx security vulnerabilities and hardening best practices – part I
At the moment, nginx is one of the most popular web servers. It is lightweight, fast, robust, supports the major operating systems and is the web server of choice for Netflix, WordPress.com and other high traffic sites. nginx can easily handle 10,000 inactive HTTP connections with as little as 2.5M of memory. In this article, I will provide tips on nginx server security, showing how to secure your nginx installation.
After installing nginx, you should gain a good understanding of nginx’s configuration settings which are found in nginx.conf. This is the main configuration file for nginx and therefore most of the security checks will be done using this file. By default nginx.conf can be found in [Nginx Installation Directory]/conf on Windows systems, and in the /etc/nginx or the /usr/local/etc/nginx directories on Linux systems.
#1. Turn on SELinux
Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a mechanism for supporting access control security policies and offers strong protection. It can stop many attacks before your system is rooted. See how to turn on SELinux for CentOS / RHEL based systems.
#2. Hardening /etc/sysctl.conf
You can control and configure Linux kernel and networking settings via /etc/sysctl.conf.
# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1
# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
# Turn on exec-shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1
# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1
# Optimization for port use for LBs
# Increase system file descriptor limit
fs.file-max = 65535
# Allow for more PIDs (to reduce rollover problems); may break some programs 32768
kernel.pid_max = 65536
# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000
# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
#3. Disable any unwanted nginx modules
Nginx modules are automatically included during installation of nginx and no run-time selection of modules is currently supported, so disabling certain modules requires re-compiling nginx. It is recommended to disable any modules which are not required, as this minimizes the risk of potential attacks by limiting the operations allowed by the web server. To do this, disable those modules with the configure option during installation. The example below disables the autoindex module (which generates automatic directory listings) and recompiles nginx.
$ ./configure --without-http_autoindex_module
$ make
$ make install
#4. Disable nginx server_tokens
By default the server_tokens directive in nginx displays the nginx version number in all automatically generated error pages. This could lead to unnecessary information disclosure where an unauthorized user would be able to gain knowledge about the version of nginx that is being used. The server_tokens directive should be disabled from the nginx configuration file by setting – server_tokens off.
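As a sketch, the directive goes inside the http block of nginx.conf (the path below is the usual Linux default):
# /etc/nginx/nginx.conf
http {
    server_tokens off;
    # ... the rest of your http configuration ...
}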
#5. Install SELinux policy
By default SELinux will not protect the nginx web server. However, you can install and compile protection as follows. First, install required SELinux compile time support:
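The commands from the original are not reproduced here; on CentOS/RHEL the compile-time support is typically installed with something like the following (the exact package set may vary with your release):
$ yum -y install selinux-policy-targeted selinux-policy-devel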
#6. Use a restrictive iptables-based firewall
The following firewall script blocks everything and only allows:
Incoming HTTP (TCP port 80) requests
Incoming ICMP ping requests
Outgoing ntp (port 123) requests
Outgoing smtp (TCP port 25) requests
#!/bin/bash
IPT="/sbin/iptables"
#### IPS ######
# Get server public ip
SERVER_IP=$(ifconfig eth0 | grep 'inet addr:' | awk -F'inet addr:' '{ print $2}' | awk '{ print $1}')
LB1_IP="204.54.1.1"
LB2_IP="204.54.1.2"
# Do some smart logic so that we can use the same script on LB2 too
OTHER_LB=""
[[ "$SERVER_IP" == "$LB1_IP" ]] && OTHER_LB="$LB2_IP" || OTHER_LB="$LB1_IP"
[[ "$OTHER_LB" == "$LB2_IP" ]] && OPP_LB="$LB1_IP" || OPP_LB="$LB2_IP"
### IPs ###
PUB_SSH_ONLY="122.xx.yy.zz/29"
#### FILES #####
BLOCKED_IP_TDB=/root/.fw/blocked.ip.txt
SPOOFIP="127.0.0.0/8 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16 0.0.0.0/8 240.0.0.0/4 255.255.255.255/32 168.254.0.0/16 224.0.0.0/4 240.0.0.0/5 248.0.0.0/5 192.0.2.0/24"
BADIPS=$( [[ -f ${BLOCKED_IP_TDB} ]] && egrep -v "^#|^$" ${BLOCKED_IP_TDB})
### Interfaces ###
PUB_IF="eth0" # public interface
LO_IF="lo" # loopback
VPN_IF="eth1" # vpn / private net
### start firewall ###
echo "Setting LB1 $(hostname) Firewall..."
# DROP and close everything
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP
# Unlimited lo access
$IPT -A INPUT -i ${LO_IF} -j ACCEPT
$IPT -A OUTPUT -o ${LO_IF} -j ACCEPT
# Unlimited vpn / pnet access
$IPT -A INPUT -i ${VPN_IF} -j ACCEPT
$IPT -A OUTPUT -o ${VPN_IF} -j ACCEPT
# Drop sync
$IPT -A INPUT -i ${PUB_IF} -p tcp ! --syn -m state --state NEW -j DROP
# Drop Fragments
$IPT -A INPUT -i ${PUB_IF} -f -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL FIN,URG,PSH -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL ALL -j DROP
# Drop NULL packets
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " NULL Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
# Drop XMAS
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " XMAS Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
# Drop FIN packet scans
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " Fin Packets Scan "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
# Log and get rid of broadcast / multicast and invalid
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j LOG --log-prefix " Broadcast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j DROP
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j LOG --log-prefix " Multicast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j DROP
$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j LOG --log-prefix " Invalid "
$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j DROP
# Log and block spoofed ips
$IPT -N spooflist
for ipblock in $SPOOFIP
do
$IPT -A spooflist -i ${PUB_IF} -s $ipblock -j LOG --log-prefix " SPOOF List Block "
$IPT -A spooflist -i ${PUB_IF} -s $ipblock -j DROP
done
$IPT -I INPUT -j spooflist
$IPT -I OUTPUT -j spooflist
$IPT -I FORWARD -j spooflist
# Allow ssh only from selected public ips
for ip in ${PUB_SSH_ONLY}
do
$IPT -A INPUT -i ${PUB_IF} -s ${ip} -p tcp -d ${SERVER_IP} --destination-port 22 -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -d ${ip} -p tcp -s ${SERVER_IP} --sport 22 -j ACCEPT
done
# allow incoming ICMP ping pong stuff
$IPT -A INPUT -i ${PUB_IF} -p icmp --icmp-type 8 -s 0/0 -m state --state NEW,ESTABLISHED,RELATED -m limit --limit 30/sec -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p icmp --icmp-type 0 -d 0/0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow incoming HTTP port 80
$IPT -A INPUT -i ${PUB_IF} -p tcp -s 0/0 --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --sport 80 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
# allow outgoing ntp
$IPT -A OUTPUT -o ${PUB_IF} -p udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT
# allow outgoing smtp
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
### add your other rules here ####
#######################
# drop and log everything else
$IPT -A INPUT -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " DEFAULT DROP "
$IPT -A INPUT -j DROP
exit 0
#7. Control Buffer Overflow Attacks
Buffer overflow attacks are made possible by writing data to a buffer, exceeding that buffer’s boundary and overwriting memory fragments of a process. To prevent this in nginx we can set buffer size limitations for all clients. This can be done through the nginx configuration file using the following directives (a combined example follows the list):
client_body_buffer_size – Use this directive to specify the client request body buffer size. The default value is 8k or 16k but it is recommended to set this as low as 1k as follows: client_body_buffer_size 1k
client_header_buffer_size – Use this directive to specify the header buffer size for the client request header. A buffer size of 1k is adequate for the majority of requests.
client_max_body_size – Use this directive to specify the maximum accepted body size for a client request. A 1k directive should be sufficient, however this needs to be increased if you are receiving file uploads via the POST method.
large_client_header_buffers – Use this directive to specify the maximum number and size of buffers to be used to read large client request headers. A large_client_header_buffers 2 1k directive sets the maximum number of buffers to 2, each with a maximum size of 1k. This directive will accept 2kB data URI.
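Put together, a sketch of these buffer limits inside the http (or server) block, using the values recommended above:
# /etc/nginx/nginx.conf
http {
    client_body_buffer_size      1k;
    client_header_buffer_size    1k;
    client_max_body_size         1k;
    large_client_header_buffers  2 1k;
}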
You also need to control timeouts to improve server performance and cut off idle clients. Edit them as follows (a combined example follows the list):
client_body_timeout 10; – Sets the read timeout for the request body from the client. The timeout applies only if the body is not received in one read step. If the client sends nothing within this time, nginx returns a “Request time out” error (408). The default is 60.
client_header_timeout 10; – Sets the timeout for reading the client request header. The timeout applies only if the header is not received in one read step. If the client sends nothing within this time, nginx returns a “Request time out” error (408).
keepalive_timeout 5 5; – The first parameter sets the timeout for keep-alive connections with the client; the server will close connections after this time. The optional second parameter sets the time value in the Keep-Alive: timeout=time header of the response. This header can convince some browsers to close the connection, so that the server does not have to. Without this parameter, nginx does not send a Keep-Alive header (though this is not what makes a connection “keep-alive”).
send_timeout 10; – Sets the response timeout to the client. The timeout applies not to the entire transfer of the response, but only between two read operations; if the client takes nothing within this time, nginx shuts down the connection.
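Together, these timeout directives might be set as follows inside the http or server block:
# /etc/nginx/nginx.conf
http {
    client_body_timeout   10;
    client_header_timeout 10;
    keepalive_timeout     5 5;
    send_timeout          10;
}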
#8. Control simultaneous connections
You can use NginxHttpLimitZone module to limit the number of simultaneous connections for the assigned session or as a special case, from one IP address. Edit nginx.conf:
# Directive describes the zone, in which the session states are stored i.e. store in slimits.
# 1m can handle 32000 sessions with 32 bytes/session, set to 5m x 32000 session
limit_zone slimits $binary_remote_addr 5m;
# Control maximum number of simultaneous connections for one session i.e.
# restricts the amount of connections from a single ip address
limit_conn slimits 5;
The above limits remote clients to no more than 5 concurrently open connections per remote IP address.
#9. Allow access to our domain only
If a bot is just making random scans of servers across all domains, deny it. You should only allow requests for your configured virtual domains or reverse proxy requests. You don’t want to serve requests made to a bare IP address:
# Only requests to our Host are allowed i.e. hexadix.com, static.hexadix.com, www.hexadix.com
if ($host !~ ^(hexadix.com|static.hexadix.com|www.hexadix.com)$ ) {
return 444;
}
#10. Disable any unwanted HTTP methods
It is suggested to disable any HTTP methods which are not going to be utilized and which are not required to be implemented on the web server. The below condition, which is added under the ‘server’ section in the Nginx configuration file will only allow GET, HEAD, and POST methods and will filter out methods such as DELETE and TRACE by issuing a 444 No Response status code.
if ($request_method !~ ^(GET|HEAD|POST)$ )
{
return 444;
}
#11. Deny certain User-Agents
You can easily block user-agents i.e. scanners, bots, and spammers who may be abusing your server.
# Block some robots
if ($http_user_agent ~* msnbot|scrapbot) {
return 403;
}
#12. Block referral spam
Referer spam is dangerous. It can harm your SEO ranking via web logs (if published), as the referer field points to their spammy site. You can block access from referer spammers with these lines.
# Deny certain Referers
if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) )
{
# return 404;
return 403;
}
#13. Stop image hot-linking
Image or HTML hotlinking means someone links to one of your images but displays it on their own site. The end result is that you pay the bandwidth bill while the content looks like part of the hijacker’s site. This is usually done on forums and blogs. I strongly suggest you block and stop image hotlinking at the server level itself.
# Stop deep linking or hot linking
location /images/ {
valid_referers none blocked www.example.com example.com;
if ($invalid_referer) {
return 403;
}
}
Another example with link to banned image:
valid_referers blocked www.example.com example.com;
if ($invalid_referer) {
rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.examples.com/banned.jpg last;
}
See also: HowTo: Use nginx map to block image hotlinking. This is useful if you want to block tons of domains.
#14. Directory restrictions
You can set access control for a specified directory. All web directories should be configured on a case-by-case basis, allowing access only where needed.
Limiting Access By Ip Address
You can limit access to directory by ip address to /docs/ directory:
location /docs/ {
# block one workstation
deny 192.168.1.1;
# allow anyone in 192.168.1.0/24
allow 192.168.1.0/24;
# drop rest of the world
deny all;
}
Password Protect The Directory
First create the password file and add a user called vivek:
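The original commands are not included in this copy; a typical approach (an assumption, using Apache’s htpasswd tool from the httpd-tools package together with nginx’s auth_basic directives) looks like this:
# create the password file and add a user called vivek
$ yum install httpd-tools
$ htpasswd -c /usr/local/nginx/conf/.htpasswd vivek

# then protect the directory in the nginx configuration
location /docs/ {
    auth_basic "Restricted";
    auth_basic_user_file /usr/local/nginx/conf/.htpasswd;
}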
PHP is one of the most popular server-side scripting languages. Edit /etc/php.ini as follows:
# Disallow dangerous functions
disable_functions = phpinfo, system, mail, exec
## Try to limit resources ##
# Maximum execution time of each script, in seconds
max_execution_time = 30
# Maximum amount of time each script may spend parsing request data
max_input_time = 60
# Maximum amount of memory a script may consume (8MB)
memory_limit = 8M
# Maximum size of POST data that PHP will accept.
post_max_size = 8M
# Whether to allow HTTP file uploads.
file_uploads = Off
# Maximum allowed size for uploaded files.
upload_max_filesize = 2M
# Do not expose PHP error messages to external users
display_errors = Off
# Turn on safe mode
safe_mode = On
# Only allow access to executables in isolated directory
safe_mode_exec_dir = php-required-executables-path
# Limit external access to PHP environment
safe_mode_allowed_env_vars = PHP_
# Restrict PHP information leakage
expose_php = Off
# Log all errors
log_errors = On
# Do not register globals for input data
register_globals = Off
# Minimize allowable PHP post size
post_max_size = 1K
# Ensure PHP redirects appropriately
cgi.force_redirect = 0
# Disallow uploading unless necessary
file_uploads = Off
# Enable SQL safe mode
sql.safe_mode = On
# Avoid Opening remote files
allow_url_fopen = Off
# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
# Zero-day exploit defense.
# http://forum.nginx.org/read.php?2,88845,page=3
# Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
# Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another machine. And then cross your fingers that you won't get hacked.
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# fastcgi_intercept_errors on;
fastcgi_pass php;
}
#17. Run Nginx In A Chroot Jail (Containers) If Possible
Putting nginx in a chroot jail minimizes the damage done by a potential break-in by isolating the web server to a small section of the filesystem. You can use traditional chroot kind of setup with nginx. If possible use FreeBSD jails, XEN, or OpenVZ virtualization which uses the concept of containers.
#18. Limit connections per IP at the firewall level
A webserver must keep an eye on connections and limit connections per second. This is serving 101. Both pf and iptables can throttle end users before accessing your nginx server.
Linux Iptables: Throttle Nginx Connections Per Second
The following example will drop incoming connections if an IP makes more than 15 connection attempts to port 80 within 60 seconds:
$ /sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
$ /sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 15 -j DROP
$ service iptables save
BSD PF: Throttle Nginx Connections Per Second
Edit your /etc/pf.conf and update it as follows. The following limits the maximum number of connections per source to 100. 15/5 specifies the rate limit: no more than 15 connections in a 5 second span. If anyone breaks our rules, they are added to our abusive_ips table and blocked from making any further connections. Finally, the flush keyword kills all states created by the matching rule which originate from the host that exceeds these limits.
webserver_ip="202.54.1.1"
table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $webserver_ip port www flags S/SA keep state (max-src-conn 100, max-src-conn-rate 15/5, overload <abusive_ips> flush)
Please adjust all values as per your requirements and traffic (browsers may open multiple connections to your site).
#19. Configure operating system to protect web server
Turn on SELinux as described above. Set correct permissions on the /nginx document root. Nginx runs as a user named nginx. However, the files in the DocumentRoot (/nginx or /usr/local/nginx/html) should not be owned or writable by that user. To find files with wrong permissions, use:
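The find commands themselves are missing from this copy; a sketch matching the document roots mentioned above:
$ find /nginx -user nginx
$ find /usr/local/nginx/html -user nginx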
Pass the -delete option to the find command and it will get rid of those files too.
#20. Restrict outgoing nginx connections
Crackers will download files locally on your server using tools such as wget. Use iptables to block outgoing connections from the nginx user. The ipt_owner module attempts to match various characteristics of the packet creator for locally generated packets; it is only valid in the OUTPUT chain. In this example, we allow the vivek user to connect outside using port 80 (useful for RHN access or to grab CentOS updates via repos):
$ /sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
Add the above rule to your iptables-based shell script. Do not allow the nginx web server user to connect outside.
#21. Make use of ModSecurity
ModSecurity is an open-source module that works as a web application firewall. Different functionalities include filtering, server identity masking, and null byte attack prevention. Real-time traffic monitoring is also allowed through this module. Therefore it is recommended to follow the ModSecurity manual to install the mod_security module in order to strengthen your security options.
#22. Set up and configure nginx access and error logs
Nginx access and error logs are enabled by default and are located at logs/error.log for error logs and at logs/access.log for access logs. The error_log directive in the nginx configuration file allows you to set the directory where the error logs will be saved as well as specify which logs will be recorded according to their severity level. For example, a ‘crit’ severity level will log important problems that need to be addressed and any other issues which have a higher severity level than ‘crit’. To set the severity level of error logs to ‘crit’, the error_log directive needs to be set up as follows – error_log logs/error.log crit;. A complete list of error_log severity levels can be found in the official nginx documentation.
Alternatively, the access_log directive can be modified from the nginx configuration file to specify a location where the access logs will be saved (other than the default location). Also the log_format directive can be used to configure the format of the logged messages, as explained in the nginx documentation.
#23. Monitor nginx access and error logs
Continuous monitoring and management of the nginx log files will give a better understanding of requests made to your web server and also list any errors that were encountered. This will help to expose any attempted attacks made against the server as well as identify any optimizations that need to be carried out to improve the server’s performance. Log management tools, such as logrotate, can be used to rotate and compress old logs in order to free up disk space. Also the ngx_http_stub_status_module module provides access to basic status information, and nginx Plus, the commercial version of nginx, provides real-time activity monitoring of traffic, load and other performance metrics.
Check the log files. They will give you some understanding of what attacks are thrown against the server and allow you to check whether the necessary level of security is present or not.
# grep "/login.php??" /usr/local/nginx/logs/access_log
# grep "...etc/passwd" /usr/local/nginx/logs/access_log
# egrep -i "denied|error|warn" /usr/local/nginx/logs/error_log
The auditd service is provided for system auditing. Turn it on to audit SELinux events, authentication events, file modifications, account modifications and so on. As usual, disable all unneeded services and follow our “Linux Server Hardening” security tips.
#24. Configure Nginx to include an X-Frame-Options header
The X-Frame-Options HTTP response header is normally used to indicate if a browser should be allowed to render a page in a <frame> or an <iframe>. This could prevent clickjacking attacks and therefore it is recommended to enable the Nginx server to include the X-Frame-Options header. In order to do so the following parameter must be added to the nginx configuration file under the ‘server’ section – add_header X-Frame-Options "SAMEORIGIN";
server {
listen 8887;
server_name localhost;
add_header X-Frame-Options "SAMEORIGIN";
location / {
root html;
index index.html index.htm;
}
}
#25. X-XSS Protection
Inject the X-XSS-Protection HTTP header to mitigate cross-site scripting attacks.
Modify the default.conf or ssl.conf file to add the following:
add_header X-XSS-Protection "1; mode=block";
Save the configuration file and restart nginx. You can use Check Headers tool to verify after implementation.
#26. Keep your nginx up to date
As with any other server software, it is recommended that you always update your Nginx server to the latest stable version. These releases often contain fixes for vulnerabilities identified in previous versions, such as the directory traversal vulnerability that existed in Nginx versions prior to 0.7.63, and 0.8.x before 0.8.17. These updates frequently include new security features and improvements. Nginx security advisories and news about the latest releases are published on the official nginx website.
Have you ever faced the problem when your transaction log grows too big and your disk is full?
You cannot shrink the transaction log file unless you do transaction log backup first. You cannot do the transaction log backup because your disk doesn’t have enough free space. The transaction log is getting bigger every minute. The clock is ticking. What will you do?
Normally, SQL Server makes you back up the transaction log first, and only then allows you to shrink the transaction log file. If you want to look more into this process, take a look at this blog.
However in this case, there’s not enough space to write the log backup file. So that wouldn’t work.
After spending a lot of time googling, I ended up with this solution:
BACKUP LOG DBNAME TO DISK ='NUL'
where DBNAME should be changed to the database name that we are dealing with.
What this piece of code does is back up the transaction log to the ‘NUL’ device, which means the backup process runs but nothing is written to disk. After this process completes, SQL Server considers the transaction log backed up and allows us to shrink the transaction log file.
This solution is perfect, except for one thing: during this process, all the transaction log is thrown away, which means that if we can’t make a full backup of the data afterwards, we might face data loss if the server crashes.
Therefore, use this solution at your own risk. And remember to do a full backup of the database right after this process, just to be sure. After the full backup of the database, you’re safe.
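For completeness, once the log has been backed up to NUL you can shrink the log file with DBCC SHRINKFILE; a sketch (DBNAME_log is an assumed logical file name, so look it up first, and the target size is in MB):
-- find the logical name of the log file
USE DBNAME;
GO
SELECT name, type_desc FROM sys.database_files;
GO
-- shrink the log file to roughly 1 GB
DBCC SHRINKFILE (DBNAME_log, 1024);
GO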
As you may already know, SQL Server normally uses a cache to speed up queries that are executed often. This is no doubt a very cool feature of SQL Server.
However, sometimes when you are tuning your query, SQL Server caches your query or part of it, so you can’t be sure whether your query is really fast or just fast because SQL Server fetches the result from the cache. In this case, you may want to clear the SQL Server cache so that the query result is fetched directly from the database.
Fortunately, SQL Server provides us with an undocumented stored procedure to do the job.
To clear the cache, run the command as below:
dbcc dropcleanbuffers
That’s it. Now execute your query again to see how it performs 😀
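If you also want to flush dirty pages and clear cached query plans while tuning (a common companion step, not part of the original post), a sketch:
-- write dirty pages to disk so DROPCLEANBUFFERS can remove everything from the buffer pool
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
-- clear cached execution plans as well
DBCC FREEPROCCACHE;
GO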
Dealing with SQL Server’s transaction log can be very troublesome.
In this post, we are going to take a look at some common use cases when dealing with the transaction log in SQL Server.
Hope it helps! 😀
1. View database size and log size
SELECT
DB_NAME(db.database_id) DatabaseName,
(CAST(mfrows.RowSize AS FLOAT)*8)/1024 RowSizeMB,
(CAST(mflog.LogSize AS FLOAT)*8)/1024 LogSizeMB,
(CAST(mfstream.StreamSize AS FLOAT)*8)/1024 StreamSizeMB,
(CAST(mftext.TextIndexSize AS FLOAT)*8)/1024 TextIndexSizeMB
FROM sys.databases db
LEFT JOIN (SELECT database_id, SUM(size) RowSize FROM sys.master_files WHERE type = 0 GROUP BY database_id, type) mfrows ON mfrows.database_id = db.database_id
LEFT JOIN (SELECT database_id, SUM(size) LogSize FROM sys.master_files WHERE type = 1 GROUP BY database_id, type) mflog ON mflog.database_id = db.database_id
LEFT JOIN (SELECT database_id, SUM(size) StreamSize FROM sys.master_files WHERE type = 2 GROUP BY database_id, type) mfstream ON mfstream.database_id = db.database_id
LEFT JOIN (SELECT database_id, SUM(size) TextIndexSize FROM sys.master_files WHERE type = 4 GROUP BY database_id, type) mftext ON mftext.database_id = db.database_id
or
with fs
as
(
select database_id, type, size * 8.0 / 1024 size
from sys.master_files
)
select
name,
(select sum(size) from fs where type = 0 and fs.database_id = db.database_id) DataFileSizeMB,
(select sum(size) from fs where type = 1 and fs.database_id = db.database_id) LogFileSizeMB
from sys.databases db
2. View % of log file that really contains data
DBCC SQLPERF(LOGSPACE);
GO
3. View the reason that log file cannot be reused
select name, log_reuse_wait_desc from sys.databases
4. View the size of each table in the database
SELECT
t.NAME AS TableName,
p.rows AS RowCounts,
SUM(a.total_pages) * 8 AS TotalSpaceKB,
SUM(a.used_pages) * 8 AS UsedSpaceKB,
(SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
sys.tables t
INNER JOIN
sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
sys.allocation_units a ON p.partition_id = a.container_id
WHERE
t.NAME NOT LIKE 'dt%'
AND t.is_ms_shipped = 0
AND i.OBJECT_ID > 255
GROUP BY
t.Name, p.Rows
ORDER BY
t.Name
A SQL cursor is very handy for executing a loop in SQL, especially inside a stored procedure.
Let’s look at a simple example to understand how it works.
USE AdventureWorks
GO
DECLARE @ProductID INT
DECLARE @getProductID CURSOR
SET @getProductID = CURSOR FOR
SELECT ProductID
FROM Production.Product
OPEN @getProductID
FETCH NEXT FROM @getProductID INTO @ProductID
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @ProductID
FETCH NEXT FROM @getProductID INTO @ProductID
END
CLOSE @getProductID
DEALLOCATE @getProductID
GO