
How to compile PHP and run it as a CGI binary
  • This blog post was written on a Red Hat Enterprise server, but the principles apply to just about any Linux distro out there.
  1. Generally what I would do is check the OS repository to see which version PHP will be upgraded to when you eventually do a server-wide upgrade, and then download the source files of that version. In this case we are going to use PHP 5.3.17.
  2. Download from http://www.php.net/releases/
  3. Log into your server, create a directory, cd into it, then run the following:

wget http://www.php.net/get/php-5.3.17.tar.gz/from/a/mirror

  4. Next untar the file: tar -zxvf <filename you downloaded>
  5. Next we need to get the configure flags that PHP is currently using. The easiest way is to find a domain that has PHP running and set up a phpinfo.php file that contains the following:

<?php phpinfo(); ?>

Save that file and then view it through your browser http://domain.com/phpinfo.php

You should see a PHP info page. If you do not see it, it probably means the file ownership is incorrect.

Example

-rw-r--r-- 1 root    root         19 Nov  7 14:32 phpinfo.php (incorrect)

-rw-r--r-- 1 tailor  tailor       19 Nov  7 14:32 phpinfo.php (correct)
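If the ownership is wrong, you can fix it with chown (a quick example, assuming the vhost user is tailor as above):

chown tailor:tailor phpinfo.php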

6. At the top of that phpinfo page you should see a section called "Configure Command". It looks like the example below.

 

Configure Command './configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-gd-native-ttf' '--enable-libxml' '--enable-magic-quotes' '--enable-mbstring' '--enable-soap' '--enable-sockets' '--enable-zend-multibyte' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mcrypt=/opt/libmcrypt/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--with-zlib' '--with-zlib-dir=/usr'

 

You want to copy everything starting at './configure'. From the example above:

'./configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-gd-native-ttf' '--enable-libxml' '--enable-magic-quotes' '--enable-mbstring' '--enable-soap' '--enable-sockets' '--enable-zend-multibyte' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mcrypt=/opt/libmcrypt/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--with-zlib' '--with-zlib-dir=/usr'

7. Now you will need to make some modifications to this. I am already running my PHP as a CGI, but if you are running it as a module you may see in the configure flags that MySQL is disabled even though it is enabled. This is because when PHP runs as a module, the various features are loaded as module extensions alongside PHP, and the configure flags on the phpinfo page will not reflect that.

So these are the primary flags you need to ensure are working.

Note: the flags I am using are for a 64-bit OS; the flags are different for a 32-bit OS.

This is an example of the PHP features I wanted working. I copied the configure line into a text editor and made the changes I needed, outlined below.

'./configure' '-enable-yum' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--cache-file=../config.cache' '--with-config-file-path=/etc' '--without-config-file-scan-dir' '--enable-force-cgi-redirect' '--disable-debug' '--enable-pic' '--disable-rpath' '--enable-inline-optimization' '--with-bz2' '--with-curl' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-gd' '--enable-gd-native-ttf' '--with-gettext' '--with-ncurses=shared' '--with-gmp' '--with-iconv' '--with-jpeg-dir=/usr' '--with-png' '--with-xml' '--with-libxml-dir=/usr' '--with-expat-dir=/usr' '--with-dom=shared,/usr' '--with-dom-xslt=/usr' '--with-dom-exslt=/usr' '--with-xmlrpc=shared' '--with-pcre-regex=/usr/include' '--with-zlib' '--with-layout=GNU' '--enable-bcmath' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-sockets' '--enable-sysvsem' '--enable-sysvshm' '--enable-track-vars' '--enable-trans-sid' '--enable-yp' '--enable-wddx' '--with-pear=/usr/share/pear' '--with-imap=shared' '--with-imap-ssl' '--with-kerberos' '--with-mysql=/usr' '--with-unixODBC=shared,/usr' '--enable-memory-limit' '--enable-shmop' '--enable-calendar' '--enable-mbstring' '--enable-mbstr-enc-trans' '--enable-mbregex' '--with-mime-magic=/usr/share/file/magic.mime' '--enable-dba' '--enable-db4' '--enable-gdbm' '--enable-static' '--with-openssl'

  • You will notice the following key things:
  • You want to remove "--with-apxs2=/usr/sbin/apxs": you can only compile one SAPI plus the CLI at a time, so you can't compile both the CGI and the apache2 SAPI together. http://bugs.php.net/bug.php?id=30682&edit=1
  • You want to ensure that "--without-mysql" is changed to "--with-mysql"
  • You want to ensure that you have '--enable-static' (this will enable the static libraries)
  • Enable any other flags you may want to use.

8. Create a file called config.sh on the server in the directory where you untarred the PHP source files.

9. Paste the configure flags, all on one line, into that file and save it.
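As a minimal sketch, config.sh is just the configure command on one line (shortened here; use your full flag list from above):

./configure --prefix=/usr --with-mysql=/usr --with-gd --enable-mbstring --enable-static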

10. Now run "sh config.sh". This will test the configure flags to see whether the server can support them, and it will fail on any that need dependencies installed. When you get an error on a package, you generally just want to search for the development package of the failed package, install it with yum, and then run "sh config.sh" again; keep going until it finishes properly. I have listed a few common packages people run into below.

Note: sometimes you will run into path issues, like configure not finding something in /usr/bin. What I do is run locate on the file it is looking for (usually a .so file) and create a symlink from the path configure is checking to the real file so the configure test can complete.
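For example (a hypothetical case with made-up paths, just to show the pattern):

locate libjpeg.so
ln -s /usr/lib64/libjpeg.so /usr/lib/libjpeg.so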

yum install  gcc4-c++
yum install  gdbm-devel
yum install  libjpeg-devel
yum install  libpng-devel
yum install  freetype-devel
yum install  gmp-devel
yum install  libc-client-devel
yum install  openldap-devel
yum install  mysql-devel
yum install  ncurses-devel
yum install  unixODBC-devel
yum install  postgresql-devel
yum install  net-snmp-devel
yum install  bzip2-devel
yum install  curl-devel

11. Once the test of the configure flags completes successfully, it will generate the build files at the end.

12. Now run "make" (DO NOT run "make install"; that would install over your global PHP, which is exactly what we are trying to avoid). This will take some time to complete. If it finishes successfully, you will have a CGI binary located at "sapi/cgi/php-cgi".
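Before deploying it, you can sanity-check the freshly built binary from the source directory:

./sapi/cgi/php-cgi -v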

Phase 2: Running your new PHP CGI on your vhost

  1. Copy your new php-cgi binary to the cgi-bin or ScriptAlias directory within your domain or vhost. This is usually "/home/username/www/cgi-bin".

Fix the ownership on the new CGI binary so that it runs as the apache user or the suexec user you are using for your vhost.

e.g. -rw-r--r-- 1 apache apache        19 Nov  7 14:32 php-cgi (correct)
-rw-r--r-- 1 tailor tailor        19 Nov  7 14:32 php-cgi (correct)

  2. Next ensure the file has executable permissions: "chmod +x php-cgi"
  3. Now go back into the document root folder of the domain or vhost, create a .htaccess file with the following lines, and save the file.

AddHandler php-cgi  .php  .htm
Action php-cgi /cgi-bin/php-cgi

  4. As soon as you do the above, the site will be using the new CGI PHP. If you reload the phpinfo.php page now, you should see the Server API read as:
Server API: CGI/FastCGI

 

Note: if you want to disable the CGI PHP, simply comment out the lines in the .htaccess file. The cool thing about this is that you no longer need to reload Apache for PHP changes, since it is running as a CGI. You should check the phpinfo.php page and ensure that all the flags you wanted are listed on that page; if they are not, you either missed the flag in your configure or did not compile fully.
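Disabling it is as simple as commenting out the two handler lines:

#AddHandler php-cgi .php .htm
#Action php-cgi /cgi-bin/php-cgi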

How to upgrade wordpress in a production environment

So I am writing this because I'm sure some people have run into, or will run into, this issue: you are running an older version of WordPress and have delayed the upgrade, and in addition you have delayed upgrading PHP from 5.1.6 because you're running Red Hat 5.

The problem with this type of upgrade is that it's not just a matter of upgrading WordPress; you will need to upgrade PHP to 5.2.x in order to use the latest version of WordPress. This can be a problem, since most people run PHP as a module and not as a CGI.

This means that you will have to upgrade php globally on the server, which could cause issues if you have not tested it and you may have to roll back if it doesn’t work. Please keep in mind you will need to do your own testing for your environments. However, if you want a solution that can give you virtually no downtime, then read on, as this is what I did when I was faced with the situation, which worked flawlessly.

So what you can do is compile your own version of php and run it as a cgi just for that one vhost or domain that you want to test, do the upgrade inside wordpress or follow the upgrade instructions that wordpress gives you. I am going to outline how to do this in this blog post.

Phase 1 and Phase 2: Compile your own PHP as a CGI and run it on your vhost

These two phases are identical, step for step, to the post above, "How to compile PHP and run it as a CGI binary": compile PHP 5.3.17 as a CGI binary, copy it into the vhost's cgi-bin, and enable it through the .htaccess handler lines. Follow those steps, confirm the Server API reads CGI/FastCGI on the phpinfo.php page, then continue with Phase 3 below.

Phase 3: Upgrading your WordPress

  1. Now that your PHP is running under the vhost for just this WordPress site, you can log into WordPress and see if it loads properly; it should load without issues. You can either do an update from inside WordPress or, the safer way in my humble opinion, do it on the server as indicated below.
  2. Download the latest WordPress files to a directory inside the document root of the vhost.
  3. Back up your database, and back up ALL your WordPress files in your WordPress directory. Don't forget your .htaccess file. (See the sketch after this list.)
  4. Verify the backups you created are there and usable. This is essential.
  5. Deactivate ALL your plugins.
  6. Ensure the first five steps are completed. Do not attempt the upgrade unless you have completed them.
  7. Delete the old WordPress files on your site, but DO NOT DELETE:
    1. wp-config.php file;
    2. wp-content folder (special exception: the wp-content/cache and wp-content/plugins/widgets folders should be deleted);
    3. wp-images folder;
    4. wp-includes/languages/ folder, if you are using a language file;
    5. .htaccess file, if you have added custom rules to it;
    6. robots.txt file, if your blog lives in the root of your site (i.e. the blog is the site) and you have created such a file.
  8. Once you have deleted the old files, go into the directory where you untarred the new files and delete the files and directories that you do not want to overwrite in the document root (i.e. wp-content, wp-images, wp-includes/languages, etc.).
  9. Then run "cp -r * ../". This copies everything from the new WordPress directory into the live document root.
  10. You should now be able to log into WordPress and see that it is upgraded and functioning correctly.
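A minimal sketch of the backup step in item 3, assuming a database named wordpress and the document root used earlier (adjust names and paths for your setup):

mysqldump -u root -p wordpress > ~/wordpress-db-backup.sql
tar -czvf ~/wordpress-files-backup.tar.gz /home/username/www/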

 

So now you can leave WordPress running on the CGI PHP until you pick a date to do a global PHP server update, at which time you can comment out the .htaccess lines and it will revert back to using the global PHP version as a module.

 

Hope this helped you. If you have questions, email nick@nicktailor.com.

 

 

How to install Gnome 3 on Ubuntu 12.04 LTS

If you are reading this article, chances are that you have tried the Unity interface on Ubuntu. Although Canonical has done a great job with the development of Unity, some of us still prefer to use Gnome as a default GUI. In addition, the Gnome team has also done an excellent job improving Gnome and released this as Gnome 3. Since Gnome 3 comes with both the classic (similar to Gnome 2) and the new Gnome 3 interface, I decided to focus on installing Gnome 3 in this article.

Installing Gnome 3

Before we continue, it is worth mentioning that there is a gnome package in the default Ubuntu repository. However, from what I understood from several articles, this version is outdated and does not include all the beauty that is included in the latest Gnome 3 release, so you may want to skip installing the default package from the repository.

The good news is that installing the latest Gnome 3 on Ubuntu 12.04 is extremely easy. Just copy-paste the following lines for the latest release from the Gnome team into a terminal (type Ctrl-Alt-T to open a terminal window):

sudo add-apt-repository ppa:gnome3-team/gnome3
sudo apt-get update
sudo apt-get install gnome-shell
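Once the install finishes, you can check which release the PPA gave you (the exact number depends on what the PPA carries at the time):

gnome-shell --version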

Now be sure to reboot your computer; when you reach the login screen you will have the following additional options (click on the little Ubuntu icon next to your login name):

 

I recommend using the first option, Gnome. However if you are interested in going back to a familiar environment, feel free to choose one of the two Gnome Classic options. You can log in and log out to try the different versions.

Gnome 3 Shell Extensions

One of the great new features of Gnome 3 is the possibility to add “shell extensions”. These are small user interface elements which can improve the overall user experience.

 

To install a shell extension, visit the Gnome Extensions website with your browser (the default Firefox works fine for this) and install extensions by switching the "ON/OFF" button to "ON" (you can find these buttons on the individual extension pages, in the upper left corner).

You may also want to consider installing the Gnome Tweak Tool, which will give you greater control over your shell extensions and several other Gnome settings. You can install this tool directly from the Ubuntu Software Repository, or by copy-pasting the following line into a terminal:

sudo apt-get install gnome-tweak-tool

You can now find this tweak tool by searching for "Advanced Settings" in your applications or in the System Tools menu.

 

Recommended Shell Extensions

Experiment and try out some shell extensions. Personally, I recommend at least trying out the following shell extensions:

Alternatively, if you prefer to install a small collection of popular shell extensions in one go (including most of those listed above), you can copy-paste the following lines into a terminal:

sudo add-apt-repository ppa:ricotz/testing
sudo apt-get update
sudo apt-get install gnome-shell-extensions-common

Once you have finished installing extensions, visit the Installed Extensions page on the Gnome Extensions website or the "Shell Extensions" option in the Gnome Tweak Tool. There you will be able to see, enable/disable, and customize settings of the individual extensions from the collection.

An important note about using Gnome shell extensions: unfortunately, installed shell extensions will not automatically be updated when newer versions are released. You will need to manually remove and reinstall any shell extension that conflicts with future Gnome 3 or Ubuntu updates. This is something the Gnome team is aware of and (I hope) is working on fixing.

Getting Around In Gnome 3

As mentioned earlier in this article, there are a lot of exciting new features in Gnome 3. I decided to highlight the two features that have the most impact on my daily usage of Gnome.

Multiple Workspaces

One of the first things I noticed when I logged in was that there were only two workspaces in Gnome 3 (use the keyboard shortcut Ctrl-Alt-Up/Down arrows to navigate the workspaces). My first impulse was to browse through the various settings windows and try to increase this number (I like working with four or more workspaces). However, I could not find where to change this anywhere. Only after watching a video did I understand that this is not needed anymore, as the number of active workspaces adapts dynamically to what you are actually using. Watch the video below to see what I mean.

Searching For Apps / Switching Windows

Quickly accessing popular apps and open windows is similar to how Unity does it; however, the Gnome team's approach allows you to have more screen space for the apps and windows you have open. In the video below, Jason of the Gnome team explains what I mean.

 

How to setup a MySQL replication check on your slave with email alerting

I decided to write this because there are probably lots of people who have MySQL replication set up in master-slave mode, and the only way they can tell whether replication is broken is by logging in to the slave and checking. I found this to be a pain and inefficient.

What this entails

  • This is a Perl script which will run every hour via cron
  • It will send an email alert notifying you that replication is broken on the slave
  • The script is smart enough to know if MySQL is simply stopped on the master
  • The script also checks to see whether MySQL is running or not
  1. Open a file with nano -w /usr/sbin/replicationcheck.pl (either copy and paste the script below, or download it from the link below and edit as needed; this goes on your slave MySQL server)
    http://www.nicktailor.com/files/replicationcheck
  2. Ensure the file has executable permissions: chmod +x /usr/sbin/replicationcheck.pl
  3. Create the working directory and status file: mkdir -p /root/repl_check && touch /root/repl_check/show_slave_status.txt (this file is used to pipe information to)
  4. Create the log file: touch /var/log/mysqlstopped.txt (this will be used to log the results)
  5. Finally, set up a cron job to run the script. I ran mine every hour; run crontab -e as root and add:
    0 * * * * /usr/sbin/replicationcheck.pl (this runs every hour)
  6. Lastly, if you're lazy, you can set up a bash script on the master DB which SSHes to your slave and outputs the results on the master, so you don't need to log into the slave to use the script (see the sketch below).
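A minimal sketch of that wrapper, assuming key-based SSH from the master to the slave (the hostname here is made up):

#!/bin/bash
# run the replication check on the slave and print its output locally
ssh root@slave.example.com /usr/sbin/replicationcheck.pl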

The script explained

#!/usr/bin/perl
use Sys::Hostname;
use POSIX;

$timestamp = strftime "%b%e %Y %H:%M:%S", localtime;
$host = hostname;
$email_lock = "/root/email.lck";
$mysql_socket = "/var/lib/mysql/mysql.sock";
$show_slave_status = "/root/repl_check/show_slave_status.txt";
$pword = ""; # you will need to add this to the mysql commands below if you have a password

 

# This checks whether the mysql socket exists. If it exists, mysql is running.
# If mysql is not running, we don't need to run the slave status check.

sub check_mysql_socket
{
# Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'
if (-e $mysql_socket)
{
print "MySQL running, will proceed\n";
return 1;
}
else
{
print "MySQL not running, will do nothing\n";
return 0;
}
}

 

# This is so the server doesn't repeatedly send email alerts while replication is
# broken. It works via an email lock file: when an alert is sent, the lock file is
# created and no further mail goes out until the lock is removed. You can change
# this however you see fit; this is the way I learned, so I stuck with it. It's a
# sub, so we can use it as a function later down the script.

sub check_email_lock
{
if (-e $email_lock)
{
print "email file exists\n";
return 1;
}
else
{
print "no email file exists\n";
return 0;
}
}

 

# This section continues from above: using the check_email_lock function, it sends
# the alert email if the lock file doesn't exist, then creates the lock file. You
# also define the email address you want alerts sent to here. The results are read
# from /var/log/mysqlstopped.txt when there is a problem.

sub stop_mysql
{
print "**Show Slave Status**\n";
if (check_email_lock)
{
print "email lock exists, keep email lock, no email will be sent ";
}
else
{
system ("mail -s 'mysql stopped because replication is broken $host' nick\@nicktailor.com < /var/log/mysqlstopped.txt");
system ("touch $email_lock");
print "email sent, email lock created\n";
}
}

print $timestamp . "\n";

 

# If MySQL is running, the script moves on to the next phase, where it mines the
# information we need from mysql:

  • last io error
  • last sql errno
  • slave io running
  • slave sql running

 

if (check_mysql_socket)
{
system ("/usr/bin/mysql -Bse 'show slave status\\G' > $show_slave_status");
$last_io_errno = `less $show_slave_status | grep Last_IO_Errno | /usr/bin/awk '{print \$2}'`;
$last_sql_errno = `less $show_slave_status | grep Last_SQL_Errno | /usr/bin/awk '{print \$2}'`;
$slave_io_running = `less $show_slave_status | grep Slave_IO_Running | /usr/bin/awk '{print \$2}'`;
$slave_sql_running = `less $show_slave_status | grep Slave_SQL_Running | /usr/bin/awk '{print \$2}'`;

# trim newline characters
chomp($last_io_errno);
chomp($last_sql_errno);
chomp($slave_io_running);
chomp($slave_sql_running);

print "last io error is " . $last_io_errno . "\n";
print "last sql errno is " . $last_sql_errno . "\n";
print "slave io running is " . $slave_io_running . "\n";
print "slave sql running is " . $slave_sql_running . "\n";

# This piece is here because if you stop mysql on the master, "show slave status"
# on the slave returns a very specific result. Test yours to see whether the results
# match the code here, and edit accordingly. Basically it says: if last_io_errno is
# greater than 0 and does not equal 2013, there is a problem; if last_sql_errno is
# greater than 0, there is also a problem. You get the idea; you can add as many
# conditions as you need. I found this combo covered most scenarios.

if (($last_io_errno > 0) && ($last_io_errno != 2013))
{
&stop_mysql;
}
elsif ($last_sql_errno > 0)
{
&stop_mysql;
}
# if the slave is not running, Slave_IO_Running and Slave_SQL_Running are set to No
elsif (($slave_io_running eq "No") && ($slave_sql_running eq "No"))
{
&stop_mysql;
}

 

else
{
if (check_email_lock)
{
system ("rm $email_lock");
}
print "replication fine or master's just down, mysql can keep going, removed lock file\n";
}
}
else
{
print "#2 MySQL not running, will do nothing\n";
}

print "\n#########################\n";

If the script works and replication is fine, you should see the output below, and no email will be sent to you:

#########################

Oct 3 2012 00:13:12
MySQL running, will proceed
last io error is 0
last sql errno is 0
slave io running is Yes
slave sql running is Yes
no email file exists
replication fine or master’s just down, mysql can keep going, removed lock file

#########################

If there is a problem it will look something like:

##########################

Oct 2 2012 02:02:54
MySQL running, will proceed
last io error is 0
last sql errno is 0
slave io running is No
slave sql running is No
**Show Slave Status**
no email file exists
Null message body; hope that's ok
email sent, email lock created

Hope this helped you 🙂

Cheers

Nick Tailor

 

How to setup Arpwatch across multiple vlans

  • Arpwatch is primarily used to avoid IP conflicts on your network
  • This helps avoid accidental outages caused by a MAC address ARPing for an IP that is already configured on another device
  • This also helps track down gateway theft, where a compromised machine on your network accidentally claims your gateway's IP
  • Arpwatch keeps track of ethernet/IP address pairings. It syslogs activity and reports certain changes via email. Arpwatch uses pcap(3) to listen for ARP packets on a local ethernet interface.

Installing ArpWatch on Debian

Note: you will need to ensure that your VLANs are trunked, and you might need to tag them depending on your setup, so that ARP request packets from arpwatch are not dropped if they cross to another switch.

  1. You can download the source and compile it yourself, but the Debian repositories already have it, so this is pretty easy to install: "apt-get install arpwatch"
  2. Create an empty file for storing host information: "touch /var/lib/arpwatch/arp.dat". If this file already exists, move to the next step.
  3. Open up /etc/arpwatch.conf and configure the interfaces to listen on whichever subnets you want arpwatch to check.

Note: since eth0 on the arpwatch server is the primary interface, I used the second NIC, plugged into a tagged VLAN, so that my arpwatch server could send packets on each VLAN.

Add these lines for email alerts

eth1 -a -m admin@nicktailor.com
eth1.1 -a -m admin@nicktailor.com
eth1.2 -a -m admin@nicktailor.com

4. You may need to exclude a specific subnet for some reason. I had to do this because we had multiple physical servers with unconfigured DRAC cards that all shared the same default IP address, so when we implemented arpwatch on our public-facing VLANs we got a lot of alerts from the DRACs. To get around it, we added the following lines to /etc/arpwatch.conf:

 

eth1 -a -z 192.168.0.0/255.255.0.0 -m admin@nicktailor.com
eth1.1 -a -z 192.168.0.0/255.255.0.0 -m admin@nicktailor.com
eth1.2 -a -z 192.168.0.0/255.255.0.0 -m admin@nicktailor.com

Note: another way to do this is to update the startup script /etc/init.d/arpwatch, editing the line below as follows:

IFACE_OPTS="-i ${IFACE} -f ${IFACE}.dat $2 -z 192.168.0.0/255.255.0.0"

Additional Configuring

  1. If you want to make the email config cleaner, for instance to send alerts to multiple addresses, open up /etc/aliases

Add the line:

arp-alert: nick@nicktailor.com, admin@nicktailor.com

2. Run newaliases, then go back into /etc/arpwatch.conf and edit the lines from step 3 as indicated below. This way you don't have to keep updating the conf; if you want to add more email addresses in the future, just update your aliases file.

eth1 -a -z 192.168.0.0/255.255.0.0 -m arp-alert
eth1.1 -a -z 192.168.0.0/255.255.0.0 -m arp-alert
eth1.2 -a -z 192.168.0.0/255.255.0.0 -m arp-alert

How to Check your logs

Everything is logged in /var/log/syslog. If you want to filter out the arpwatch entries, this is one possible way to go about it; mind you, you will need to adjust the grep based on whatever you are mining the log file for. Hope this was helpful.

cat syslog | grep -i arpwatch | grep -i reuse | cut -d” ” -f11 | sort | uniq

Plesk Mysql Queries Cheat Sheet

I decided to blog this for people who use Parallels Plesk with MySQL. I got some Parallels certifications a couple of years ago, and this stuff will help you mine MySQL information that you might need if you're running a business.
  1. Domain Information
  2. Domains and IP addresses
  3. Domain Users accounts and passwords
  4. Client usernames/passwords
  5. FTP accounts
  6. ftp users(with domain)
  7. Logrotate config for all domains
  8. DNS records for a domain
  9. DNS primary A-records for all domains
  10. Statistics application per domain
  11. SSL certificates installed under domains
  12. SSL certificate files associated with default domain on IP
  13. SSL certificate files associated with IP address
  14. SSL certificate files not in use by any domain
  15. Domains expiration in UNIX time
  16. Domains expiration in human readable time
  17. Bandwidth by service for the month(change date string accordingly)
  18. Disk usage per service by domain
  19. Mail Info
  20. Mail accounts
  21. All enabled mailboxes (local or redirect)
  22. List bounces
  23. List status of all mail to non-existent users:
  24. All (singular) email info
  25. List all Mail redirect/forwards:
  26. List all Mail redirect/forwards to external domains:
  27. Email Aliases
  28. Email Groups
  29. Email Autoresponders
  30. Mailbox quota size per domain:
  31. Databases
  32. Show databases by domain
  33. Show database users and passwords created in Plesk
  34. User Accounts
  35. ftp users(with domain):
  36. ftp users with additional details(shell,quota):
  37. database users(with domain):
  38. web users:
  39. subdomains usernames/passwords:
  40. protected directories (htpasswd):
  41. One Time Use
  42. Redirect update from previous install
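All of these queries run against the psa database. A quick way to get a prompt there, using Plesk's stored admin password the same way the one-liners below do:

mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa

Then paste any of the queries at the mysql> prompt.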

Domain Information
Domains and IP addresses
select domains.name,IP_Addresses.ip_address from domains,hosting,IP_Addresses where domains.id=hosting.dom_id and hosting.ip_address_id=IP_Addresses.id order by IP_Addresses.ip_address,domains.name;

Domain Users accounts and passwords

mysql psa -uadmin -p`cat /etc/psa/.psa.shadow` -e 'select domains.name, accounts.password from domains, accounts, dom_level_usrs where domains.id=dom_level_usrs.dom_id and accounts.id=dom_level_usrs.account_id order by domains.name;'

select domains.name,sys_users.login,accounts.password from domains,sys_users,hosting,accounts where domains.id=hosting.dom_id and hosting.sys_user_id=sys_users.id and sys_users.account_id=accounts.id order by domains.name;

Domain Users accounts and passwords and email.
select domains.name,sys_users.login,accounts.password,clients.email from domains,sys_users,hosting,accounts,clients where domains.id=hosting.dom_id and hosting.sys_user_id=sys_users.id and sys_users.account_id=accounts.id and clients.id=domains.cl_id order by domains.name;

Client usernames/passwords

select clients.login, accounts.password from clients,accounts where clients.account_id=accounts.id;

FTP accounts

mysql psa -uadmin -p`cat /etc/psa/.psa.shadow` -e 'select sys_users.home,sys_users.login,accounts.password from sys_users,accounts where sys_users.account_id=accounts.id order by home;'

ftp users(with domain)

select domains.name,sys_users.login,accounts.password from domains,sys_users,hosting,accounts where domains.id=hosting.dom_id and hosting.sys_user_id=sys_users.id and sys_users.account_id=accounts.id order by domains.name;

Logrotate config for all domains

select domains.name,log_rotation.period_type,log_rotation.period,log_rotation.max_number_of_logfiles,log_rotation.turned_on from domains,dom_param,log_rotation where domains.id=dom_param.dom_id and dom_param.param="logrotation_id" and dom_param.val=log_rotation.id;

DNS records for a domain

select domains.name,dns_recs.host,dns_recs.type,dns_recs.val from domains,dns_recs where domains.dns_zone_id=dns_recs.dns_zone_id and domains.name='nicktailor.com';

DNS primary A-records for all domains

select dns_recs.host,dns_recs.type,dns_recs.val from domains,dns_recs where domains.dns_zone_id=dns_recs.dns_zone_id and dns_recs.type='A' and domains.name=substring_index(dns_recs.host,'.',2) order by domains.name;

Statistics application per domain

select domains.name,hosting.webstat from domains, hosting where domains.id=hosting.dom_id;

List subdomains by domain
select subdomains.name,domains.name as domain from domains,sys_users,subdomains,accounts where domains.id=subdomains.dom_id and subdomains.sys_user_id=sys_users.id and sys_users.account_id=accounts.id;

select subdomains.name,domains.name as domain from domains,sys_users,subdomains,accounts where domains.id=subdomains.dom_id and subdomains.sys_user_id=sys_users.id and sys_users.account_id=accounts.id and domains.name = 'test.com';

SSL certificates installed under domains
select domains.name as domain_name,IP_Addresses.ip_address,certificates.name as cert_name,certificates.cert_file from domains,IP_Addresses,certificates,hosting where domains.cert_rep_id != "NULL" and domains.id=hosting.dom_id and hosting.ip_address_id=IP_Addresses.id and domains.cert_rep_id=certificates.id;

SSL certificate files associated with default domain on IP
select domains.name as domain,IP_Addresses.ip_address,certificates.name,certificates.cert_file from domains,certificates,IP_Addresses where IP_Addresses.ssl_certificate_id=certificates.id and IP_Addresses.default_domain_id=domains.id order by domains.name;

SSL certificate files associated with IP address
select IP_Addresses.ip_address,certificates.cert_file from certificates,IP_Addresses where IP_Addresses.ssl_certificate_id=certificates.id;

SSL certificate files not in use by any domain
select IP_Addresses.ip_address,certificates.name,certificates.cert_file from certificates,IP_Addresses where IP_Addresses.ssl_certificate_id=certificates.id and IP_Addresses.default_domain_id < 1 and certificates.name not like "%default%";

Domains expiration in UNIX time

select domains.name, Limits.limit_name, Limits.value from domains, Limits where domains.limits_id=Limits.id and Limits.limit_name="expiration" and Limits.value != -1;

Domains expiration in human readable time
mysql psa -uadmin -p`cat /etc/psa/.psa.shadow` -e 'select domains.name, Limits.limit_name, from_unixtime(Limits.value) from domains, Limits where domains.limits_id=Limits.id and Limits.limit_name="expiration" and Limits.value != -1;'

Bandwidth by service for the month(change date string accordingly)

select domains.name as domain, SUM(DomainsTraffic.http_out)/1024/1024 as HTTP_out_MB, SUM(DomainsTraffic.ftp_out)/1024/1024 as FTP_out_MB, SUM(DomainsTraffic.smtp_out)/1024/1024 as SMTP_out_MB, SUM(DomainsTraffic.pop3_imap_out)/1024/1024 as POP_IMAP_out_MB from domains,DomainsTraffic where domains.id=DomainsTraffic.dom_id and date like "2009-10%" group by domain;

Disk usage per service by domain
select domains.name,disk_usage.*,httpdocs+httpsdocs+subdomains+web_users+anonftp+logs+dbases+mailboxes+webapps+maillists+domaindumps+configs+chroot as total from domains,disk_usage where domains.id=disk_usage.dom_id order by total;

Mail Info
Mail accounts
mysql psa -uadmin -p`cat /etc/psa/.psa.shadow` -e 'select concat(mail.mail_name,"@",domains.name) as address,accounts.password from mail,domains,accounts where mail.dom_id=domains.id and mail.account_id=accounts.id order by address;'

mysql> select pname,email from clients; (lists all client names and emails)

All enabled mailboxes (local or redirect)
SELECT mail.mail_name,domains.name,accounts.password,mail.postbox, mail.redirect, mail.redir_addr FROM mail,domains,accounts WHERE mail.dom_id=domains.id AND mail.account_id=accounts.id and (mail.postbox='true' or mail.redirect='true') ORDER BY domains.name,mail.mail_name;

List bounces. If checking for backscatter, be sure to check for autoresponders too.
select domains.name from domains,Parameters,DomainServices where DomainServices.type='mail' and Parameters.value = 'bounce' and domains.id = DomainServices.dom_id and DomainServices.parameters_id=Parameters.id order by domains.name;

List status of all mail to non-existent users:

select domains.name,Parameters.value from domains,Parameters,DomainServices where DomainServices.type='mail' and Parameters.value in ('catch','reject','bounce') and domains.id=DomainServices.dom_id and DomainServices.parameters_id=Parameters.id order by Parameters.value,domains.name;

All (singular) email info

SELECT mail.mail_name,domains.name,accounts.password,mail.redir_addr FROM mail,domains,accounts WHERE mail.dom_id=domains.id AND mail.account_id=accounts.id ORDER BY domains.name,mail.mail_name;

List all Mail redirect/forwards:
SELECT mail.mail_name,domains.name,mail.redir_addr FROM mail,domains WHERE mail.redirect='true' AND mail.dom_id=domains.id AND mail.redir_addr!='' ORDER BY mail.mail_name;

List all Mail redirect/forwards to external domains:
SELECT mail.mail_name,domains.name,mail.redir_addr FROM mail,domains WHERE mail.redirect='true' AND mail.dom_id=domains.id AND mail.redir_addr!='' AND SUBSTRING_INDEX(mail.redir_addr,'@',-1) NOT IN (SELECT name from domains) ORDER BY domains.name,mail.mail_name;

Email Aliases

select mail.mail_name, domains.name, mail_aliases.alias from mail, domains, mail_aliases where mail.dom_id=domains.id and mail.id=mail_aliases.mn_id;

Email Groups
select mail.mail_name as group_mailbox,domains.name,mail_redir.address as group_member from mail,domains,mail_redir where mail.dom_id=domains.id and mail.id=mail_redir.mn_id and mail.mail_group='true' order by domains.name,mail.mail_name,mail_redir.address;

Email Autoresponders
select mail.mail_name, domains.name as domain, mail_resp.resp_name, mail_resp.resp_on, mail_resp.key_where as filter, mail_resp.subject, mail_resp.reply_to from mail,domains,mail_resp where mail.dom_id=domains.id and mail.id=mail_resp.mn_id and mail.autoresponder='true' and mail_resp.resp_on='true';

Mailbox quota size per domain:
select domains.name,Limits.limit_name,Limits.value/1024/1024 as "quota MB" from domains,Limits where Limits.limit_name='mbox_quota' and domains.limits_id=Limits.id;
Databases

Show databases by domain
select domains.name as Domain, data_bases.name as DB from domains, data_bases where data_bases.dom_id=domains.id order by domains.name;

Show database users and passwords created in Plesk
select name,login,password from psa.db_users, psa.accounts, psa.data_bases where psa.db_users.account_id=psa.accounts.id and psa.data_bases.id=psa.db_users.db_id;

User Accounts

ftp users(with domain):
select domains.name,sys_users.login,accounts.password from domains,sys_users,hosting,accounts where domains.id=hosting.dom_id and hosting.sys_user_id=sys_users.id and sys_users.account_id=accounts.id order by domains.name;

ftp users with additional details(shell,quota):
select domains.name,sys_users.login,accounts.password,sys_users.shell,sys_users.quota from domains,sys_users,hosting,accounts where domains.id=hosting.dom_id and hosting.sys_user_id=sys_users.id and sys_users.account_id=accounts.id order by domains.name;

database users(with domain):

select domains.name as domain_name, data_bases.name as DB_name,db_users.login,password from db_users, accounts, data_bases,domains where domains.id=data_bases.dom_id and db_users.account_id=accounts.id and data_bases.id=db_users.db_id order by domains.name;

web users:

select domains.name, sys_users.login, web_users.sys_user_id from domains,sys_users,web_users where domains.id=web_users.dom_id and web_users.sys_user_id=sys_users.id;

subdomains usernames/passwords:

select subdomains.name,domains.name as domain, sys_users.login, accounts.password from domains,sys_users,subdomains,accounts where domains.id=subdomains.dom_id and subdomains.sys_user_id=sys_users.id and sys_users.account_id=accounts.id;

protected directories (htpasswd):
select domains.name, protected_dirs.path, pd_users.login, accounts.password from domains, protected_dirs, pd_users, accounts where domains.id=protected_dirs.dom_id and protected_dirs.id=pd_users.pd_id and pd_users.account_id=accounts.id;

One Time Use
Redirect update from previous install. This was for an instance where redirects were brought over from a previous installation, but the migration failed to check whether the redirects were active or not. This compares the two and only updates the differences.

UPDATE mail SET redirect='false' WHERE id IN (SELECT mail_copy.id FROM mail_copy,domains WHERE mail_copy.redirect='true' AND mail_copy.dom_id=domains.id AND mail_copy.redir_addr!='' and CONCAT(mail_copy.mail_name,'@',domains.name) IN (SELECT CONCAT(mail.mail_name,'@',domains.name) AS address FROM psa_orig.mail,psa_orig.domains WHERE mail.redirect='false' AND mail.dom_id=domains.id AND mail.redir_addr!=''));

Varnish 3.0 Setup in HA for Drupal 7 with Redhat (Part 2)

So now that you have read the "Varnish and how it works" post on my blog, we can begin with how I went about setting up my Varnish. The diagram above is basically the same setup we had.

Since we were using Red Hat and this was eventually going into production, I decided it was best to stick to repos. Keep in mind you don't have to do this; you can go ahead and compile your own version if you wish. For the purpose of this tutorial, we're going to use a third-party repo called EPEL.

  1. Installing Varnish 3.0 on Redhat
  • This configuration is based on Lullabot's setup, with some tweaks and things I found they forgot to mention, which I spent hours learning.

Varnish is distributed in the EPEL (Extra Packages for Enterprise Linux) package repositories. However, while EPEL allows new versions to be distributed, it does not allow for backwards-incompatible changes. Therefore, new major versions will not hit EPEL and it is therefore not necessarily up to date. If you require a newer major version than what is available in EPEL, you should use the repository provided by varnish-cache.org.

To use the varnish-cache.org repository, run

rpm --nosignature -i http://repo.varnish-cache.org/redhat/varnish-3.0/el5/noarch/varnish-release-3.0-1.noarch.rpm

and then run

yum install varnish

The --nosignature flag is only needed on initial installation, since the Varnish GPG key is not yet in the yum keyring.

Note: after you install it, you will notice that the daemon will not start. Total piss-off, right? This is because you need to configure a few things based on the resources you have available. This is all explained in my "Varnish and how it works" post, which you have already read 😛

2. We need to get Varnish running before we can play with it; this is done in /etc/sysconfig/varnish. Below are the settings I used for my configuration; my VMs had 2 CPUs and 4 GB of RAM each.

If you want to know what these options do, go read my previous post; it would take too long to explain each flag here, and it would get boring, hence why I wrote this in two parts. Save the file and then start Varnish with /etc/init.d/varnish start. If it doesn't start, you have a mistake somewhere in here.

DAEMON_OPTS="-a *:80,*:443 \
-T 127.0.0.1:6082 \
-f /etc/varnish/default.vcl \
-u varnish -g varnish \
-S /etc/varnish/secret \
-p thread_pool_add_delay=<number of CPU cores> \
-p thread_pools=<number of CPU cores> \
-p thread_pool_max=1500 \
-p listen_depth=2048 \
# -p lru_interval=1500 \
-h classic,169313 \
-p obj_workspace=4096 \
-p connect_timeout=600 \
-p sess_workspace=50000 \
-p max_restarts=6 \
-s malloc,2G"
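For my 2-CPU VMs, the two placeholder lines above become:

-p thread_pool_add_delay=2 \
-p thread_pools=2 \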

3. Now that Varnish is started, you need to set up the VCL it will read. The better you understand how your application works, the better you will be able to fine-tune the way the cache works. There is no one way to do this; this is simply how I went about it.

VCL Configuration

The VCL file is the main location for configuring Varnish and it’s where we’ll be doing the majority of our changes. It’s important to note that Varnish includes a large set of defaults that are always automatically appended to the rules that you have specified. Unless you force a particular command like “pipe”, “pass”, or “lookup”, the defaults will be run. Varnish includes an entirely commented-out default.vcl file that is for reference.

This configuration connects to two webserver backends. Each webserver has a health probe which the VCL checks; if the probe fails, the webserver is removed from the caching round robin. The cache also updates every 30 seconds as long as one of the webservers is up and running. If both webservers go down, it will serve objects from the cache for up to 12 hours; this varies depending on how you configure it.

http://www.nicktailor.com/files/default.vcl

Now, what some people do is put a PHP script on the webservers which Varnish probes; if everything passes, the webserver stays in the pool. I didn't bother doing it that way. I set up a status page that connected to a database and had the health probe simply look for an HTTP 200 status code: if the page came up, the webserver stayed in the pool; if it didn't, it was dropped.

# Define the list of backends (web servers).

# Port 80 Backend Servers

backend web1 { .host = "status.nicktailor.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; }}

backend web2 { .host = "status.nicktailor.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; }}
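To actually round-robin between those two backends, the VCL also needs a director; a minimal sketch in Varnish 3 syntax (the director name "default" is my choice here):

director default round-robin {
  { .backend = web1; }
  { .backend = web2; }
}

sub vcl_recv {
  set req.backend = default;
}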

Caching Even if Apache Goes Down

So a few people have written articles about this and say how to do it, but it took me a bit to get this working.

Even in an environment where everything has a redundant backup, it's possible for the entire site to go "down" due to any number of causes: a programming error, a database connection failure, or just plain excessive amounts of traffic. In such scenarios, the most likely outcome is that Apache will be overloaded and begin rejecting requests. In those situations, Varnish can save your bacon with the Grace period. Apache gives Varnish an expiration date for each piece of content it serves. Varnish automatically discards outdated content and retrieves a fresh copy when it hits the expiration time. However, if the web server is down, it's impossible to retrieve the fresh copy. "Grace" is a setting that allows Varnish to serve up cached copies of a page even after the expiration period if Apache is down. Varnish will continue to serve the outdated cached copies it has until Apache becomes available again.

To enable Grace, you just need to specify the setting in vcl_recv and in vcl_fetch:

 

# Respond to incoming requests.
sub vcl_recv {
# Allow the backend to serve up stale content if it is responding slowly.
set req.grace = 6h;
}

# Code determining what to do when serving items from the Apache servers.
sub vcl_fetch {
# Allow items to be stale if needed.
set beresp.grace = 6h;
}

 

Note: the missing piece here is the most important one. Without the TTL below, if the webservers go down, your backend will show an error page after about two minutes, because by default the TTL for objects to stay in the cache when the webservers (aka the backend) go down is set extremely low. Everyone seems to forget to mention this crucial piece of information.

# Code determining what to do when serving items from the Apache servers.

sub vcl_fetch {

# Allow items to be stale if needed.

set beresp.grace = 24h;
set beresp.ttl = 6h;
}

Just remember: while the powers of grace are awesome, Varnish can only serve up a page that it has already received a request for and cached. This can be a problem when you’re dealing with authenticated users, who are usually served customized versions of pages that are difficult to cache. If you’re serving uncached pages to authenticated users and all of your web servers die, the last thing you want is to present them with error messages. Instead, wouldn’t it be great if Varnish could “fall back” to the anonymous pages that it does have cached until the web servers came back? Fortunately, it can — and doing this is remarkably easy! Just add this extra bit of code into the vcl_recv sub-routine:

 

# Respond to incoming requests.
sub vcl_recv {
# …code from above.

# Use anonymous, cached pages if all backends are down.
if (!req.backend.healthy) {
unset req.http.Cookie;
}
}

Varnish sets the property req.backend.healthy if any web server is available. If all web servers go down, this flag becomes FALSE. Varnish will then strip the cookie that indicates a logged-in user from incoming requests and attempt to retrieve an anonymous version of the page. As soon as one server becomes healthy again, Varnish will quit stripping the cookie from incoming requests and pass them along to Apache as normal.

Making Varnish Pass to Apache for Uncached Content

Often when configuring Varnish to work with an application like Drupal, you’ll have some pages that should absolutely never be cached. In those scenarios, you can easily tell Varnish to not cache those URLs by returning a “pass” statement.

# Do not cache these paths.
if (req.url ~ "^/status\.php$" ||
req.url ~ "^/update\.php$" ||
req.url ~ "^/ooyala/ping$" ||
req.url ~ "^/admin/build/features" ||
req.url ~ "^/info/.*$" ||
req.url ~ "^/flag/.*$" ||
req.url ~ "^.*/ajax/.*$" ||
req.url ~ "^.*/ahah/.*$") {
return (pass);
}

Varnish will still act as an intermediary between requests from the outside world and your web server, but the “pass” command ensures that it will always retrieve a fresh copy of the page.

In some situations, though, you do need Varnish to give the outside world a direct connection to Apache. Why is it necessary? By default, Varnish will always respond to page requests with an explicitly specified “content-length”. This information allows web browsers to display progress indicators to users, but some types of files don’t have predictable lengths. Streaming audio and video, and any files that are being generated on the server and downloaded in real-time, are of unknown size, and Varnish can’t provide the content-length information. This is often encountered on Drupal sites when using the Backup and Migrate module, which creates a SQL dump of the database and sends it directly to the web browser of the user who requested the backup.

To keep Varnish working in these situations, it must be instructed to “pipe” those special request types directly to Apache.

# Pipe these paths directly to Apache for streaming.
if (req.url ~ "^/admin/content/backup_migrate/export") {
return (pipe);
}

 

How to view the log and what to look for

varnishlog | grep -i -v ng (this outputs one page of the log at a time so you can read it without it scrolling all over the place)

  • One of the key things to look for is whether your backend is healthy; the log should show that, and if it does not, something is still wrong. I have jotted down what it should look like below.

Every poll is recorded in the shared memory log as follows:

    0 Backend_health - b0 Still healthy 4--X-S-RH 9 8 10 0.029291 0.030875 HTTP/1.1 200 Ok

The fields are:

  • 0 — Constant
  • Backend_health — Log record tag
  • - — Client/backend indicator (the third field in the example line above)
  • b0 — Name of the backend
  • two words indicating state:
    • “Still healthy”
    • “Still sick”
    • “Back healthy”
    • “Went sick”

Notice that the second word indicates the present state, and a first word of “Still” indicates an unchanged state.

  • 4--X-S-RH — Flags indicating how the latest poll went
    • 4 — IPv4 connection established
    • 6 — IPv6 connection established
    • x — Request transmit failed
    • X — Request transmit succeeded
    • s — TCP socket shutdown failed
    • S — TCP socket shutdown succeeded
    • r — Read response failed
    • R — Read response succeeded
    • H — Happy with result
  • 9 — Number of good polls in the last .window polls
  • 8 — .threshold (see above)
  • 10 — .window (see above)
  • 0.029291 — Response time this poll or zero if it failed
  • 0.030875 — Exponential average (r=4) of response time for good polls.
  • HTTP/1.1 200 Ok — The HTTP response from the backend.
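If you just want to watch the health checks scroll by, you can ask varnishlog to show only that record tag (this uses the standard -i include option with the Backend_health tag from the example above):

varnishlog -i Backend_health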
Some important tools for Varnish

  • Varnishhist – The varnishhist utility reads varnishd(1) shared memory logs and presents a continuously updated histogram showing the distribution of the last N requests by their processing time. The value of N and the vertical scale are displayed in the top left corner. The horizontal scale is logarithmic. Hits are marked with a pipe character (“|”), and misses are marked with a hash character (“#”).
  • Varnishtop – The varnishtop utility reads varnishd(1) shared memory logs and presents a continuously updated list of the most commonly occurring log entries. With suitable filtering using the -I, -i, -X and -x options, it can be used to display a ranking of requested documents, clients, user agents, or any other information which is recorded in the log.
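As a quick illustration of that filtering (these tag names are the standard Varnish 3 ones, not anything specific to this setup):

varnishtop -i RxURL (ranks the most requested URLs)
varnishtop -i RxHeader -I ^User-Agent (ranks the most common User-Agent headers)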

Warming up the Varnish Cache

Note – You will run into this as well. In order for your cache to start working you will need to warm it up. The utility below is what you should use. I went about it a different way while I was testing, because I did not know about the tool below: I used a wget script that deleted the pages it downloaded after it was done, to warm up my cache while I was testing.

Example:
wget --mirror -r -N -D http://www.nicktailor.com – you will need to check the wget flags, I did this from memory (a corrected sketch follows below)
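Here is a hedged version of that warm-up using flags I am more confident of; wget’s --delete-after option exists specifically for pre-fetching pages through a proxy, so each page is removed as soon as it has been downloaded:

wget --mirror --delete-after --no-host-directories http://www.nicktailor.com/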
  • Varnishreplay – The varnishreplay utility parses varnish logs and attempts to reproduce the traffic. It is typically used to warm up caches or for various forms of testing. The following options are available:
    -a backend Send the traffic over tcp to this server, specified by an address and a port. This option is mandatory. Only IPv4 is supported at this time.
    -D Turn on debugging mode.
    -r file Parse logs from this file. The input file has to be from a varnishlog of the same version as the varnishreplay binary. This option is mandatory.
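A hedged usage sketch, assuming you first capture a raw log with varnishlog -w and that your backend answers on web1:80 (both the file name and the address are placeholders):

varnishlog -w /tmp/replay.log (capture some traffic to a raw log file)
varnishreplay -a web1:80 -r /tmp/replay.log (replay that traffic against the backend to warm it up)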

 

Understanding how Varnish works (Part 1)

I put this post together because you kind of need to understand these things before you try to set up Varnish, otherwise you will be going by trial and error like I was, which took a bit longer. If I had known these things it would have helped.

Varnish 3.0 How it works

I am writing this blog post because when I set up Varnish I found it very painful to learn; Varnish does not work out of the box, and it needs to be configured to even start on Red Hat. Although there are some great posts out there on how to set it up, they all fail to mention key details that every newb wants to know and ends up digging all over the net to find. So I have decided to save everyone the trouble, and I’m writing it from beginning to end with descriptions of why and how it all works.

Understanding The Architecture and process model

 

Varnish has two main processes: the management process and the child process. The management process applies configuration changes (VCL and parameters), compiles VCL, monitors Varnish, initializes Varnish, and provides a command line interface, accessible either directly on the terminal or through a management interface.

The management process polls the child process every few seconds to see if it’s still there. If it doesn’t get a reply within a reasonable time, the management process will kill the child and start it back up again. The same happens if the child unexpectedly exits, for example from a segmentation fault or assert error.

This ensures that even if Varnish does contain a critical bug, it will start back up again fast. Usually within a few seconds, depending on the conditions.

The child process

The child process consists of several different types of threads, including, but not limited to:

  • Acceptor thread, to accept new connections and delegate them
  • Worker threads – one per session. It’s common to use hundreds of worker threads.
  • Expiry thread, to evict old content from the cache

Varnish uses workspaces to reduce the contention between each thread when they need to acquire or modify memory. There are multiple workspaces, but the most important one is the session workspace, which is used to manipulate session data. An example is changing www.example.com to example.com before it is entered into the cache, to reduce the number of duplicates.

It is important to remember that even if you have 5MB of session workspace and are using 1000 threads, the actual memory usage is not 5GB. The virtual memory usage will indeed be 5GB, but unless you actually use the memory, this is not a problem. Your memory controller and operating system will keep track of what you actually use.

To communicate with the rest of the system, the child process uses a shared memory log accessible from the file system. This means that if a thread needs to log something, all it has to do is grab a lock, write to a memory area and then free the lock. In addition to that, each worker thread has a cache for log data to reduce lock contention.

The log file is usually about 90MB, and split in two. The first part is counters, the second part is request data. To view the actual data, a number of tools exist that parse the shared memory log. Because the log data is not meant to be written to disk in its raw form, Varnish can afford to be very verbose. You then use one of the log-parsing tools to extract the piece of information you want – either to store it permanently or to monitor Varnish in real-time.

Child process crashes and restarts are logged to syslog. This makes it crucially important to monitor syslog, otherwise you may never even know they happened unless you go looking for them, because the perceived downtime is so short.

VCL compilation

Configuring the caching policies of Varnish is done in the Varnish Configuration Language (VCL). Your VCL is then translated by the management process into C and compiled by a normal C compiler – typically gcc. Lastly, it is linked into the running Varnish instance.

As a result of this, changing configuration while Varnish is running is very cheap. Varnish may want to keep the old configuration around for a bit in case it still has references to it, but the policies of the new VCL take effect immediately.

Because the compilation is done outside of the child process, there is no risk of affecting the running Varnish by accidentally loading an ill-formatted VCL.

A compiled VCL file is kept around until you restart Varnish completely, or until you issue vcl.discard from the management interface. You can only discard compiled VCL files after all references to them are gone, and the number of references left is part of the output of vcl.list.
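To make that concrete, here is a hedged example of loading a new VCL into a running Varnish with the standard vcl.* CLI commands (the management port, secret file and configuration names are placeholders for illustration):

varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.load newconf /etc/varnish/default.vcl
varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.use newconf
varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.list (shows the reference count for each compiled VCL)
varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.discard oldconf (only possible once its reference count reaches 0)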

Storage backends

Varnish supports different methods of allocating space for the cache, and you choose which one you want with the -s argument.

  • file
  • malloc
  • persistent (experimental)

Rule of thumb: malloc if it fits in memory, file if it doesn’t. Expect around 1kB of overhead per object cached.

They approach the same basic problem from two different angles. With the malloc-method, Varnish will request the entire size of the cache with a malloc() (memory allocation) library call. The operating system divides the cache between memory and disk by swapping out what it can’t fit in memory.

The alternative is to use the file storage backend, which instead creates a file on a filesystem to contain the entire cache, then tells the operating system through the mmap() (memory map) system call to map the entire file into memory if possible.

The file storage method does not retain data when you stop or restart Varnish! This is what persistent storage is for. When -s file is used, Varnish does not keep track of what is written to disk and what is not. As a result, it’s impossible to know whether the cache on disk can be used or not — it’s just random data. Varnish will not (and can not) re-use old cache if you use -s file.

While malloc will use swap to store data to disk, file will use memory to cache the data instead. Varnish allows you to choose between the two because the performance of the two approaches has varied historically.

The persistent storage backend is similar to file, but experimental. It does not yet gracefully handle situations where you run out of space. We only recommend using persistent if you have a large amount of data that you must cache and are prepared to work with us to track down bugs.
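For example, the storage method is chosen with -s on the varnishd command line. A hedged sketch (the paths and sizes are placeholders, not recommendations):

varnishd -f /etc/varnish/default.vcl -a :80 -T localhost:6082 -s malloc,1G
varnishd -f /etc/varnish/default.vcl -a :80 -T localhost:6082 -s file,/var/lib/varnish/varnish_storage.bin,10G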

Tunable parameters

In the CLI:

param.show -l

Varnish has many different parameters which can be adjusted to make Varnish act better under specific workloads or with specific software and hardware setups. They can all be viewed with param.show in the management interface and set with the -p option passed to Varnish – or directly in the management interface.

Remember that changes made in the management interface are not stored anywhere, so unless you store your changes in a startup script, they will be lost when Varnish restarts.

The general advice with regards to parameters is to keep it simple. Most of the defaults are very good, and even though they might give a small boost to performance, it’s generally better to use safe defaults if you don’t have a very specific need.
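For completeness, a hedged example of both ways of setting a parameter (the value is illustrative only):

varnishadm -T localhost:6082 param.set thread_pool_min 100 (takes effect immediately, but is lost when Varnish restarts)
varnishd -f /etc/varnish/default.vcl -a :80 -p thread_pool_min=100 (set at startup; keep it in your init script so it survives restarts)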

A few hidden commands exist in the CLI, which can be revealed with help -d. These are meant exclusively for development or testing, and many of them are downright dangerous. They are hidden for a reason, and the only exception is perhaps debug.health, which is somewhat common to use.

The shared memory log

Varnish’s shared memory log is used to log most data. It’s sometimes called a shm-log, and it operates round-robin within a fixed capacity, overwriting the oldest entries.

There’s not much you have to do with the shared memory log, except ensure that it does not cause I/O. This is easily accomplished by putting it on a tmpfs.

This is typically done in ‘/etc/fstab’, and the shmlog is normally kept in ‘/var/lib/varnish’ or equivalent locations. All the content in that directory is safe to delete.
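A hedged /etc/fstab entry for this (the size is a placeholder; make it comfortably larger than the shmlog itself):

tmpfs /var/lib/varnish tmpfs defaults,noatime,size=128m 0 0

Then mount it and restart Varnish so the shmlog is recreated on the tmpfs:

mount /var/lib/varnish
service varnish restart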

The shared memory log is not persistent, so do not expect it to contain any real history.

The typical size of the shared memory log is 80MB. If you want to see old log entries, not just real-time, you can use the -d argument for varnishlog: varnishlog -d.

Warning: Some packages will use -s file by default with a path that puts the storage file in the same directory as the shmlog. You want to avoid this.

Threading model
The child process runs multiple threads
Worker threads are the bread and butter of the Varnish architecture
Utility-threads
Balance

 

The child process of Varnish is where the magic takes place. It consists of several distinct threads performing different tasks. The following table lists some interesting threads, to give you an idea of what goes on. The table is not complete.

Thread name       Number of threads            Task
cache-worker      One per active connection    Handle requests
cache-main        One                          Startup
ban lurker        One                          Clean bans
acceptor          One                          Accept new connections
epoll/kqueue      Configurable, default: 2     Manage thread pools
expire            One                          Remove old content
backend poll      One per backend poll         Health checks

Most of the time, we only deal with the cache-worker threads when configuring Varnish. With the exception of the number of thread pools, none of the other threads are configurable.

For tuning Varnish, you need to think about your expected traffic. The thread model allows you to use multiple thread pools, but time and experience has shown that as long as you have 2 thread pools, adding more will not increase performance.

The most important thread setting is the number of worker threads.

Note: If you run across tuning advice that suggests running one thread pool for each CPU core, rest assured that this is old advice. Experiments and data from production environments have revealed that as long as you have two thread pools (which is the default), there is nothing to gain by increasing the number of thread pools.

 

Threading parameters
Thread pools can safely be ignored
Maximum: Roughly 5000 (total)
Start them sooner rather than later
Maximum and minimum values are per thread pool

Details of threading parameters

 

While most parameters can be left at the defaults, the exception is the number of threads. Varnish will use one thread for each session, and the number of threads you let Varnish use is directly proportional to how many requests Varnish can serve concurrently. The available parameters directly related to threads are:

Parameter                   Default value
thread_pool_add_delay       2 [milliseconds]
thread_pool_add_threshold   2 [requests]
thread_pool_fail_delay      200 [milliseconds]
thread_pool_max             500 [threads]
thread_pool_min             5 [threads]
thread_pool_purge_delay     1000 [milliseconds]
thread_pool_stack           65536 [bytes]
thread_pool_timeout         300 [seconds]
thread_pools                2 [pools]
thread_stats_rate           10 [requests]

Among these, thread_pool_min and thread_pool_max are most important. The thread_pools parameter is also of some importance, but mainly because it is used to calculate the final number of threads.

Varnish operates with multiple pools of threads. When a connection is accepted, the connection is delegated to one of these thread pools. The thread pool will further delegate the connection to a worker thread if one is available, put the connection on a queue if there are no available threads, or drop the connection if the queue is full. By default, Varnish uses 2 thread pools, and this has proven sufficient for even the busiest Varnish servers.

For the sake of keeping things simple, the current best practice is to leave thread_pools at the default 2 [pools].

Number of threads

Varnish has the ability to spawn new worker threads on demand, and remove them once the load is reduced. This is mainly intended for traffic spikes. It’s a better approach to try to always keep a few threads idle during regular traffic than it is to run on a minimum amount of threads and constantly spawn and destroy threads as demand changes. As long as you are on a 64-bit system, the cost of running a few hundred threads extra is very limited.

The thread_pool_min parameter defines how many threads will be running for each thread pool even when there is no load. thread_pool_max defines the maximum amount of threads that will be used per thread pool.

The defaults of a minimum of 5 [threads] and a maximum of 500 [threads] per thread pool, with 2 [pools], will result in:

At any given time, at least 5 [threads] * 2 [pools] worker threads will be running

No more than 500 [threads] * 2 [pools] threads will run.

We rarely recommend running with more than 5000 threads. If you seem to need more than 5000 threads, it’s very likely that there is something not quite right about your setup, and you should investigate elsewhere before you increase the maximum value.

For minimum, it’s common to operate with 500 to 1000 threads minimum (total). You can observe if this is enough through varnishstat, by looking at the N queued work requests (n_wrk_queued) counter over time. It should be fairly static after startup.
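A hedged way to watch those counters (the field names are the standard Varnish 3 ones):

varnishstat -1 -f n_wrk,n_wrk_queued,n_wrk_drop (one-shot dump of the worker-thread counters; n_wrk_queued should stay roughly flat over time)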

Timing thread growth

Varnish can use several thousand threads, and has had this capability from the very beginning. Not all operating system kernels were prepared to deal with this, though, so the parameter thread_pool_add_delay was added which ensures that there is a small delay between each thread that spawns. As operating systems have matured, this has become less important and the default value of thread_pool_add_delay has been reduced dramatically, from 20ms to 2ms.

There are a few, less important parameters related to thread timing. The thread_pool_timeout is how long a thread is kept around when there is no work for it before it is removed. This only applies if you have more threads than the minimum, and is rarely changed.

Another is the thread_pool_fail_delay, which defines how long to wait after the operating system denied us a new thread before we try again.

System parameters

As Varnish has matured, fewer and fewer parameters require tuning. The sess_workspace is one of the parameters that could still pose a problem.
  • sess_workspace – incoming HTTP header workspace (from the client)
  • Common values range from the default of 16384 [bytes] to 10MB
  • ESI typically requires exponential growth
Remember: it’s all virtual – not physical – memory.

Workspaces are some of the things you can change with parameters. The session workspace is how much memory is allocated to each HTTP session for tasks like string manipulation of incoming headers. It is also used to modify the object returned from a web server before the precise size is allocated and the object is stored read-only.

Sometimes you may have to increase the session workspace to avoid running out of workspace.
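A hedged example of doing exactly that at runtime (the value is illustrative, not a recommendation):

varnishadm -T localhost:6082 param.set sess_workspace 262144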

 

As most of the parameters can be left unchanged, we will not go through all of them, but take a look at the list param.show gives you to get an impression of what they can do.

Timers

Parameter                Default            Description                Scope
connect_timeout          0.700000 [s]       OS/network latency         Backend
first_byte_timeout       60.000000 [s]      Page generation?           Backend
between_bytes_timeout    60.000000 [s]      Hiccoughs?                 Backend
send_timeout             60 [seconds]       Client-in-tunnel           Client
sess_timeout             5 [seconds]        Keep-alive timeout         Client
cli_timeout              10 [seconds]       Management thread->child   Management

The timeout-parameters are generally set to pretty good defaults, but you might have to adjust them for strange applications. The connection timeout is tuned for a geographically close web server, and might have to be increased if your Varnish server and web server are not close.

Keep in mind that the session timeout affects how long sessions are kept around, which in turn affects file descriptors left open. It is not wise to increase the session timeout without taking this into consideration.

The cli_timeout is how long the management thread waits for the worker thread to reply before it assumes it is dead, kills it and starts it back up. The default value seems to do the trick for most users today.
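If you do need to change one of these, it is the same param.set mechanism as before. A hedged example for a backend that is far away from the Varnish server (the value is illustrative):

varnishadm -T localhost:6082 param.set connect_timeout 2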

Now that you have read this, you can go read my
Varnish Configuration for Drupal in HA on Redhat

How to jail users via sftp on Drupal Servers using Aegir

How to jail users via sftp on Drupal Servers

You will need to ensure the OpenSSH server you’re running is at least version 5.1. If it is not, then please check out “How to jail subdomain sftp users via chroot with plesk” on my blog; it has instructions on how to update your OpenSSH if you’re running Red Hat or any similar OS.

Note: /etc/ssh/sshd_config (this config is slightly different on Drupal servers than on Plesk ones, so that Dreamweaver can sftp)
===================================
# override default of no subsystems
#Subsystem sftp /usr/libexec/openssh/sftp-server 

Subsystem sftp internal-sftp
Match Group sftp
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no
====================================
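After editing sshd_config it is worth checking the syntax before restarting the daemon, since a broken config can lock you out. A hedged example for Red Hat style systems:

/usr/sbin/sshd -t (prints nothing if the config parses cleanly)
service sshd restart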
1. Do the following:
-# usermod -G sftp joe (add the user to the sftp group)
-# usermod -s /bin/false joe (change the shell of the user)
-# chown -R root:root /home/joe (the parent directory has to be owned by root for chroot)
-# chmod 755 /home/joe (permissions on the parent directory have to be 755 for sftp to work via chroot)
-# passwd joe (set a password for the user)
2. Create a directory inside the home directory of the new user and give it the same name as the directory you want them to be jailed to:
mkdir /home/joe/(same name as the directory you want the user jailed to)
e.g. mkdir /home/joe/jailed
3. Now you are going to mount the directory that you want the user jailed into onto the new user’s home directory:
-# mount --bind <fullpathofdirectoryyouwanttojailuser> <pathtonewusershomedirectory>
e.g.
mount --bind /www_data/sites/drupal-6.19/sites/test.com/jailed /home/joe/jailed
Note: I create this file and add it to /etc/rc.local so that if your server reboots, you won’t lose your mounts.
4. Add the above line to /etc/init.d/sftpjailedmounts.sh <– this is so the mounts aren’t lost if you reboot the server; this file is loaded by /etc/rc.local (a minimal sketch of the script follows below)
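A minimal sketch of that script, assuming the same paths as the example above:

#!/bin/bash
# /etc/init.d/sftpjailedmounts.sh – recreate the jail bind mounts after a reboot
mount --bind /www_data/sites/drupal-6.19/sites/test.com/jailed /home/joe/jailed

Then make it executable and hook it into rc.local:

chmod +x /etc/init.d/sftpjailedmounts.sh
echo /etc/init.d/sftpjailedmounts.sh >> /etc/rc.local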
5. Now you’re going to change the permissions inside their home directory so the sftp user will be able to ftp files:

-# chown joe:aegir /home/joe/jailed
If you want to see your mounts, simply type mount and you will see them.
eg.
[root@dpadmprod11 jhall]# mount
/dev/mapper/VGroot-LVroot on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/mapper/VGroot-LVlocal on /local type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
WEBI_NASprod:/vol/WEBI_DpProdConfig/www_config on /www_config type nfs (rw,addr=10.90.20.6)
WEBI_NASprod:/vol/WEBI_DpProdData/www_data on /www_data type nfs (rw,addr=10.90.20.6)
/www_data/sites/drupal-6.19/sites/test.com/webcam on /home/webcam/webcam type none (rw,bind)
/www_data/sites/drupal-6.19/sites/test.com/pharmdprivate on /home/pharmsci/pharmdprivate type none (rw,bind)
/www_data/sites/drupal-6.19/sites/pharmacy.ubc.ca/jailed on /home/joe/jailed type none (rw,bind) <——

Jail Secondary FTP/Webuser Accounts with Plesk via SFTP

How to Jail Secondary FTP/Webuser Accounts with Plesk via SFTP

1. Log into Plesk and create the secondary user/password (a “WebUser” inside Plesk). You need to do this so the client can update the password for the user from the GUI.

2. mkdir /home/newuseryoucreatedinplesk (since you created the user in Plesk, the user’s home directory will need to be created manually for jailing purposes)
e.g. mkdir /home/superman

3. Next you want to do the following:
-# usermod -G sftp superman (add the user to the sftp group)
-# usermod -s /bin/false superman (change the shell of the user)
-# chown -R root:root /home/superman (the parent directory has to be owned by root for chroot)
-# chmod 755 /home/superman (permissions on the parent directory have to be 755 for sftp to work via chroot)

4. Edit the /etc/passwd file and change the home directory path of superman to /home/superman (you need to do this since Plesk created the user; do not change the UID, as it may be saved somewhere in Plesk)

eg. superman:x:10034:2522::/home/superman:/bin/false

5. Now you are going to mount the directory that you want the user jailed into onto the new user’s home directory:

-# mount --bind <fullpathofdirectoryyouwanttojailuser> <pathtonewusershomedirectory>
e.g.
mount --bind /www_data/test.com/httpdocs/jailed /home/superman/jailed

Note: I create this file, give it +x permissions, and add it to /etc/rc.local so that if the server reboots you don’t lose your mounts.
6. Add the above line to /etc/init.d/sftpjailedmounts.sh <– this is so the mounts aren’t lost if you reboot the server; this file is loaded by /etc/rc.local

7. Now you’re going to change the permissions inside their home directory so the sftp user will be able to ftp files:
-# chown superman:sftp /home/superman/jailed

8. Test and ensure you can update the password for the client from the Plesk admin panel (a quick jail check follows below).
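A quick hedged way to verify the jail from the server itself:

sftp superman@localhost (after logging in, pwd should report / – the top of the chroot – rather than the real /home/superman path)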

If you want to see your mounts, simply type mount and you will see them.
eg.

[root@test]# mount
/dev/mapper/VGroot-LVroot on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/mapper/VGroot-LVlocal on /local type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
WEBI_NASdevl:/vol/WEBI_VerfConfig/www_config on /www_config type nfs (rw,addr=10.90.20.8)
WEBI_NASdevl:/vol/WEBI_VerfData/www_data on /www_data type nfs (rw,addr=10.90.20.8)
tmpfs on /usr/local/psa/handlers/before-local type tmpfs (rw)
tmpfs on /usr/local/psa/handlers/before-queue type tmpfs (rw)
tmpfs on /usr/local/psa/handlers/before-remote type tmpfs (rw)
tmpfs on /usr/local/psa/handlers/info type tmpfs (rw)
tmpfs on /usr/local/psa/handlers/spool type tmpfs (rw,mode=0770,uid=2021,gid=31)
/www_data/test.com/httpdocs/jailed on /home/superman/jailed type none (rw,bind)<——

 

 

 

 

 
