Another story about a home server, or Operation "Silence"

    Good afternoon, Habr users!

    Surely every developer sooner or later runs into the test environment problem. Keeping a PC switched on 24x7, and stuffed to the eyeballs with experimental software, is not very convenient. On the other hand, finding hosting to your own taste, and inexpensive at that, is not an easy task either. What to do? There is a solution: deploy a sandbox at home. As I found out (more on that below), it is inexpensive, convenient and very engaging.

    Disclaimer: this post is published at the request of icepro, so do not rush to downvote my karma; better help a good person with an invite. You will not find revelations or unexpected solutions here, but if you go down this path you will find everything you need in one place.

    I'll warn you in advance: this is my first article, so go easy on me.

    Now for the details. I ask bearded admins and everyone competent in this topic in advance not to clutch at their hearts, curse, and rush to hand out minuses, because I am only learning and have never done administration seriously. But I hope beginners can shorten their path into Linux by a couple of steps and get up to speed really quickly :)

    Summary

    1. Filling
    2. OS
    3. Dev environment
    4. Backup

    Filling


    So, there is an idea; let's start implementing it. My main criteria when choosing the hardware were price, low power consumption and the absence of noise (otherwise there was a risk that my wife would banish my sandbox to the balcony, and maybe me along with it :)). All of this pointed to the Mini-ITX form factor.

    After wandering around the online stores, I found this motherboard: Intel BOXD2500HN.
    It is based on the Intel Atom D2500 processor (1.86 GHz). That much power was enough for me. It consumes about 10 watts of power, and the price is about $70. Passive cooling means no noise. I also had a 2 GB SO-DIMM memory module and a half-terabyte hard drive lying around at home.
    Now about the case. Again, I wanted less noise, so the choice fell on a case with an external power supply unit - DELUX E-2012 Black Mini-ITX.
    In total, the only fan is a 40 mm one located above the hard drive (well, there was a regular mounting spot for it, so I decided to fill it).

    OS


    Choosing an OS did not take long. I had worked with Ubuntu before, so the choice fell on its foundation - Debian. At the time of writing, the latest version (which I installed) was 7.0 - Wheezy. You can download it from the official site, where it comes in several variants depending on the desktop environment. I did not dare to install a bare console, so among the options offered I chose the most lightweight one - LXDE.

    Let me note right away that Debian impressed me with the richness of its package repository - you can find anything in it. To search for a program of interest by name, we use the command:
    sudo apt-cache search <program name, or part of it>
    
    and to install it, we run
    sudo apt-get install <program name>
    

    Later in the text I will mention installing various software; by then you will already know how it is done.

    Installing Debian is not difficult. Download the image and make a bootable USB flash drive (LinuxLive USB Creator will help with that). Next, boot from it and choose the graphical installer (it will be easier). Overall the installation is similar to installing Windows: click Next, Next, Next. But there are still a few points:
    - at the partitioning step, choose separate partitions
    - after the auto-partitioning wizard shows you the proposed layout, do not agree right away: increase the root FS (that is, "/") to at least a couple of gigabytes (mine is now 512 MB and the partition has to be watched closely)
    - leave the software selection as it is (checkboxes on "Desktop Environment" and "Standard System"; we will install the rest later)
    If you still need step-by-step installation help, I recommend searching the Internet for the manual "Web server on Debian GNU_Linux for beginners".

    Next, I made several small tweaks to the system:
    1. sudo


      In order for your user to be able to execute commands as the superuser, you need to add him to the sudoers list. Configuration details can be found in the document I mentioned above, in section "1.2.1 Basic setup of sudo"; a minimal sketch is below.
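      Just for reference, a minimal sketch of what that setup boils down to (assuming the user is called ice, as elsewhere in this article):
      su -
      apt-get install sudo        # in case sudo is not installed yet
      adduser ice sudo            # add the user to the sudo group (re-login for it to take effect)
      exit
      # alternatively, an explicit rule can be added via visudo:
      #   ice   ALL=(ALL:ALL) ALL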
    2. Remote access


      To get remote access - I am not talking about ssh here, but about the desktop itself - the VNC server x11vnc was installed.
      The setup is quite easy: first we generate an authorization file:
      x11vnc -storepasswd <pass> <file>
      

      and then add the VNC server to autostart (/etc/xdg/lxsession/LXDE/autostart):
      @/usr/bin/x11vnc -dontdisconnect -display :0 -notruecolor -noxfixes -shared -forever -rfbport 5900 -bg -o /var/log/x11vnc.log -rfbauth /home/ice/.vnc/passwd
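      To check the result from a client machine, any VNC viewer will do; with the settings above the server listens on port 5900 (display :0), so something like this should work (the server IP is the one used later in the article):
      vncviewer 192.168.1.110:0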
      

    3. Autologin


      The motherboard can be set to power back on after the electricity is interrupted and restored. But if no monitor is connected to the server and the system state is not visible, then after a power outage the graphical environment will still be stuck at the login page after the reboot. To eliminate this inconvenience, we add auto-login for our user by putting these lines into /etc/lightdm/lightdm.conf:
      autologin-user=ice
      autologin-user-timeout=0
      

    4. Hardware monitor


      The following utilities will help keep an eye on the hardware:
      lshw
      lshw-gtk
      

      For temperature monitoring, I installed lm-sensors and hddtemp.
      The command
      sensors
      
      shows motherboard readings from the available sensors.

      Before using the utility, it needs to detect all the sensors; for that, run:
      /usr/sbin/sensors-detect
      

      And the command
      hddtemp /dev/sda
      
      tells you how hot the hard drive is getting.

      At first I was a bit obsessed with checking the temperature and other sensors, so I wrote a short script to collect and log the data:
      #!/bin/bash
      echo '################## TIME ##################'
      date
      echo '################# UP TIME ################'
      uptime
      echo '################# MB TEMP ################'
      sensors
      echo '################ HDD TEMP ################'
      sudo hddtemp /dev/sda
      echo
      echo

      Now we'll schedule the script to run, but first we allow it to be executed via sudo without a password:
      visudo
      ice     ALL=NOPASSWD: /home/ice/scripts/monitoring/temp.sh
      
      and now Cron:
      sudo crontab -e -u ice
      */10 * * * * sudo /home/ice/scripts/monitoring/temp.sh >> /home/ice/scripts/monitoring/temp.log 2>&1
      

      You can verify that launches occur using the command:
      grep CRON /var/log/syslog
      
      And one more thing: so that the logs do not pile up endlessly, I set up log rotation. To do this, install logrotate:
      sudo apt-get install logrotate
      

      Next, create a configuration file in the /etc/logrotate.d/ folder. Mine looks like this:
      /home/ice/scripts/monitoring/temp.log { # path to the logs
        daily # rotate daily
        missingok # a missing file is not an error
        rotate 30 # keep the last 30 rotated files
        compress # compress the rotated file
        delaycompress # compress the previous file on the next rotation (i.e. *.log.1 stays uncompressed, *.log.2 and older are compressed)
        notifempty # do not rotate empty files
        create 640 ice ice # right after rotation, create an empty file with the given permissions and owner
      }
      

    5. Other little things





    Dev environment


    So, now the most interesting part: what I managed to cram into this little box.

    LAMP | nginx | Node.js | MongoDB | Git | Java | Python | Ruby | .NET | Jenkins

    LAMP


    Yes, I am an active web developer and I am lost without this platform. Everything installs trivially:
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get install apache2 apache2-doc php5 libapache2-mod-php5 php-pear
    sudo apt-get install mysql-server mysql-client php5-mysql
    sudo apt-get install php5-curl php5-gd php5-imagick php5-ldap php5-imap php5-memcache php5-common php5-mysql php5-ps php5-tidy imagemagick php5-xcache php5-xdebug php5-xmlrpc php5-xsl
    
    But right after the installation I hit a snag: PHP files did not want to be processed by Apache. To fix this, I made the following changes:
    /etc/apache2$ sudo gedit apache2.conf
    # before the includes section, add
    AddHandler application/x-httpd-php .php .php4 .php3 .html
    AddType application/x-httpd-php .html
    

    For convenient PHP debugging, I installed and configured Xdebug. The installation is described in detail at the following links, and the gist of the configuration is sketched right after them:
    - Configuring Xdebug for PHP development / Linux
    - Remote Xdebug on PhpStorm
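    Roughly, the remote-debugging part comes down to a few Xdebug 2.x directives in /etc/php5/conf.d/xdebug.ini (the IDE host IP below is an assumption - put your workstation's address), followed by an Apache restart:
    xdebug.remote_enable = 1
    xdebug.remote_handler = dbgp
    xdebug.remote_host = 192.168.1.100   ; the machine running PhpStorm
    xdebug.remote_port = 9000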

    When creating virtual hosts, do not forget to register them in /etc/hosts; a minimal example is below.
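    For instance, a minimal host might look like this (the names and paths here are just placeholders):
    # /etc/apache2/sites-available/myproject
    <VirtualHost *:80>
        ServerName myproject.loc
        DocumentRoot /home/www/myproject
    </VirtualHost>

    # enable it and reload apache
    sudo a2ensite myproject
    sudo service apache2 reload

    # /etc/hosts on the machine you browse from
    192.168.1.110   myproject.loc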

    nginx


    To improve Apache's performance, it is recommended to use it together with nginx, with the following division of roles: Apache is the backend, nginx is the frontend. How to set up such a configuration is described in the article "Installing and Configuring Nginx. Nginx frontend + Apache backend"; a rough sketch is below.
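    In short, nginx listens on port 80 and serves static files itself, while everything else is proxied to Apache moved to another port. A rough sketch of the nginx server block (the port, host name and paths are assumptions; Apache's Listen directive in /etc/apache2/ports.conf has to be changed to match):
    server {
        listen 80;
        server_name myproject.loc;

        # static files are served by nginx directly
        location ~* \.(jpg|jpeg|gif|png|css|js|ico)$ {
            root /home/www/myproject;
        }

        # everything else goes to Apache on port 8080
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }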

    Node.js


    A great platform, especially for small client-server applications. WebSockets alone are worth it. Okay, back to the installation. Installing node.js is slightly non-trivial, but not difficult - the true way, that is, from source:
    sudo apt-get install python g++ make checkinstall
    mkdir ~/src && cd $_
    wget -N http://nodejs.org/dist/node-latest.tar.gz
    tar xzvf node-latest.tar.gz && cd node-v* # remove the "v" from the version number in the checkinstall dialog
    ./configure
    checkinstall 
    sudo dpkg -i node_*
    

    Installation is a little more detailed here - Installing Node.js

    MongoDB


    Why not join the NoSQL community? That is the question I asked myself. Well, maybe I did not plan to use NoSQL heavily, but just to get a feel for it - why not?
    Let's install it!
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
    echo 'deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
    sudo apt-get update
    sudo apt-get install mongodb-10gen
    
    ... and run
    sudo /etc/init.d/mongodb start
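    To make sure it is alive, you can poke it from the mongo shell (the collection name is a throwaway example):
    mongo
    > db.test_collection.insert({ name: "habr", value: 1 })
    > db.test_collection.find()
    > exit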
    

    Git


    For a long time I had wanted to get away from keeping my repositories only on public github. And now the time has come. Git itself is not difficult to install - in the package repository it is simply called git - but configuring it for convenient use is a bit more involved; we need to:
    - create a separate user - git
    - install gitolite to administer the repositories

    This video helped me deal with the situation where the keys I generated in PuTTY were being ignored - How To Fix “Server Refused Our Key” Error That Caused By RSA Public Key Generated By Puttygen

    Next, we clone the gitolite admin repository - ssh://git@192.168.1.110/gitolite-admin.git - and voila, we manage our repositories.
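    Adding a new repository then comes down to editing the config inside the cloned gitolite-admin and pushing it back; a quick sketch (the repository and user names are examples):
    cd gitolite-admin
    # in conf/gitolite.conf add something like:
    #   repo myproject
    #       RW+     =   ice
    git add conf/gitolite.conf
    git commit -m "add myproject repo"
    git push
    # after the push, gitolite creates the empty repository on the server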

    The following links helped to configure git + gitolite:
    - Server setup. Gitolite - hosting git repositories
    - Own git-server from scratch

    For a convenient overview of repositories from the browser there is - gitweb. How to install it is described here - Setting up Gitweb on Debian .
    From myself I will add:
    usermod -a -G gitolite www-data
    
    so that Apache and gitolite become friends.

    And here is the virtual host setup (collected bit by bit from different sources until it worked):
    <VirtualHost *:81>
            ServerAdmin webmaster@localhost
            ServerName git-web.loc
            SetEnv  GITWEB_CONFIG   /etc/gitweb.conf
            DocumentRoot /home/git/repositories
            Alias /static/gitweb.css /usr/share/gitweb/static/gitweb.css
            Alias /static/git-logo.png /usr/share/gitweb/static/git-logo.png
            Alias /static/git-favicon.png /usr/share/gitweb/static/git-favicon.png
            Alias /static/gitweb.js /usr/share/gitweb/static/gitweb.js
            Alias /git /home/git/repositories
            ScriptAlias /gitweb.cgi /usr/lib/cgi-bin/gitweb.cgi
            DirectoryIndex gitweb.cgi
            <Directory /home/git/repositories/>
                    Allow from All
                    Options +ExecCGI
                    AllowOverride All
                    AuthType Basic
                    AuthName "Private Repository"
                    AuthUserFile /home/ice/stuff/keys/.htpasswd-gitweb
                    Require valid-user
                    AddHandler cgi-script .cgi
                    DirectoryIndex gitweb.cgi
                    RewriteEngine On
                    RewriteCond %{REQUEST_FILENAME} !-f
                    RewriteRule ^.* /gitweb.cgi/$0 [L,PT]
            </Directory>
            SetEnv GIT_PROJECT_ROOT /home/git/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            ErrorLog ${APACHE_LOG_DIR}/git_web_error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/git_web_access.log combined
    </VirtualHost>
    Basic authorization is to taste; you can disable it. Creating the password file is shown below.
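    The .htpasswd-gitweb file referenced in AuthUserFile can be created with htpasswd from apache2-utils, for example:
    sudo apt-get install apache2-utils
    sudo htpasswd -c /home/ice/stuff/keys/.htpasswd-gitweb ice   # -c creates the file; omit it when adding more users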


    Java


    Let's move on to the Java platform. The first step is to remove OpenJDK and install Java 7:
    sudo apt-get remove openjdk*
    su -
    # add the java repositories
    echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main" | tee -a /etc/apt/sources.list
    echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu precise main" | tee -a /etc/apt/sources.list
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
    apt-get update
    # accept the Oracle software license
    echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections
    # install Oracle JDK7
    apt-get install oracle-java7-installer
    # drop out of root
    exit
    # make sure java was installed
    java -version
    

    On top of that, I added the following software:
    - Scala (I am just learning it, so why not practice on my own server)
    - GlassFish - one of the most actively developed (if not the most) application servers
    * if GlassFish does not start because port 8080 is already busy (as it turned out in my case), change the default port. To do this, find the port in GlassFish_Server/glassfish/domains/domain1/config/domain.xml and set another one:
    <network-listener name="http-listener-1" port="8081" protocol="http-listener-1" thread-pool="http-thread-pool" transport="tcp"></network-listener>
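    After editing domain.xml, the domain has to be restarted; assuming GlassFish is unpacked under /opt, roughly:
    cd /opt/glassfish3/bin   # path to your GlassFish installation (assumption)
    ./asadmin stop-domain domain1
    ./asadmin start-domain domain1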

    Python


    Python came next, together with Django. Installing it from the repository is pretty easy; a quick sketch is below. You can see how easy it is to use here - Writing your first Django app.
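    For the record, a minimal sketch of installing it from the repository and spinning up a skeleton project (Django 1.4-era commands on Wheezy; the project name is arbitrary):
    sudo apt-get install python-django
    django-admin.py startproject mysite
    cd mysite
    python manage.py runserver 0.0.0.0:8000   # listen on all interfaces so it is reachable over the LAN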

    Ruby


    Like Python, Ruby was installed together with its framework - Rails. It is a bit harder to install, so here are the instructions:
    apt-get install build-essential libapache2-mod-passenger ruby rdoc ruby-dev libopenssl-ruby rubygems
    gem install fastthread
    gem install rails --version 3.0.4
    
    And do not forget to add rails to $PATH:
    PATH=".../var/lib/gems/VERSION/bin"
    Well, using it is just as easy - Getting Started with Rails; a quick start is sketched below.
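    A test application is then a couple of commands away (the application name and port are arbitrary):
    rails new blog
    cd blog
    rails server -p 3001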

    For internal needs, it was also decided to deploy a bug tracker, and that turned out to be Redmine.
    Put it like this:
    # install the required libraries
    aptitude install libmagickcore-dev
    aptitude install libmagickwand-dev
    aptitude install ruby1.9.1-dev
    aptitude install libmysqlclient-dev
    # download and unpack redmine
    cd /opt
    wget http://files.rubyforge.vm.bytemark.co.uk/redmine/redmine-2.3.1.tar.gz
    tar -zxvf redmine-2.3.1.tar.gz
    cd /var/www
    ln -s /opt/redmine-2.3.1/public redmine
    chown -R www-data:www-data /opt/redmine-2.3.1
    

    We go into the mysql client and create the database and user:
    CREATE DATABASE redmine_default CHARACTER SET utf8;
    CREATE USER 'redmine'@'localhost' IDENTIFIED BY 'my-password';
    GRANT ALL PRIVILEGES ON redmine_default.* TO 'redmine'@'localhost';
    

    Create a database configuration
    cd /opt/redmine-2.3.1/config
    cp database.yml.example database.yml
    vi database.yml
    
    and fill it
    production:
    	adapter: mysql2
    	database: redmine_default
    	host: localhost
    	username: redmine
    	password: my-password
    	encoding: utf8
    

    Create a settings file:
    cd /opt/redmine-2.3.1/config
    cp configuration.yml.example configuration.yml
    vi configuration.yml
    
    and configure it to taste (fortunately, the config is full of comments).
    Now install the Ruby bundler:
    gem install bundler
    bundle install --without development test postgresql sqlite
    rake generate_secret_token
    bundle install
    
    and prepare the database:
    RAILS_ENV=production rake db:migrate
    RAILS_ENV=production rake redmine:load_default_data
    

    After all of this you need to configure a virtual host in Apache, and then you can start using it; a Passenger-based sketch is below.
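    Since libapache2-mod-passenger is already installed for Rails, the virtual host can be a simple Passenger one, along these lines (ServerName is an example; /var/www/redmine is the symlink created above):
    <VirtualHost *:80>
            ServerName redmine.loc
            DocumentRoot /var/www/redmine
            RailsEnv production
            <Directory /var/www/redmine>
                    Options -MultiViews
                    AllowOverride All
            </Directory>
    </VirtualHost>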

    The repository integration (getting information about commits from the repository) is described here - Redmine Settings. I chose the option where automatic polling of the repositories is set up via Cron, roughly like the entry below.
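    In my understanding that polling boils down to a crontab entry along these lines (the interval is an example; the path is the Redmine installation above):
    # crontab of the user that has read access to the repositories
    */10 * * * * cd /opt/redmine-2.3.1 && RAILS_ENV=production rake redmine:fetch_changesets >> /dev/null 2>&1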

    .NET


    Let's not forget about the .NET platform. The main components - the Mono platform itself and XSP (the ASP.NET server) - are installed like this:
    sudo apt-get install mono-common mono-xsp4
    

    That's it; now almost all the delights of .NET are available to us. A quick smoke test is sketched below.
    Details can be found here: Mono for Debian .
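    A quick smoke test: drop a trivial .aspx page into a folder and serve it with xsp4 (the path and port are arbitrary), then open http://<server>:8085/index.aspx in a browser:
    mkdir ~/aspnet-test && cd ~/aspnet-test
    # index.aspx contains a couple of lines like:
    #   <%@ Page Language="C#" %>
    #   <html><body><% Response.Write("Hello from Mono, " + DateTime.Now); %></body></html>
    xsp4 --root . --port 8085 --nonstop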

    Jenkins


    And crowning all of this is the CI server. Let's look at how to install it and configure it, for example, for... PHP.
    Install Jenkins:
    sudo apt-get update
    sudo apt-get install php5-cli php5-xdebug php-pear ant git
    php -r 'echo "Xdebug loaded? "; echo (extension_loaded("xdebug")) ? "yes" : "no"; echo chr(10);'
    wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
    sudo bash -c "echo 'deb http://pkg.jenkins-ci.org/debian binary/' > /etc/apt/sources.list.d/jenkins.list"
    sudo apt-get update
    sudo apt-get install jenkins
    

    Add plugins
    wget http://localhost:8080/jnlpJars/jenkins-cli.jar
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin checkstyle
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin cloverphp
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin dry
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin htmlpublisher
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin jdepend
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin plot
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin pmd
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin violations
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin xunit
    java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin git
    java -jar jenkins-cli.jar -s http://localhost:8080 safe-restart
    


    Install PHPUnit and additional tools
    sudo pear upgrade PEAR
    sudo pear channel-discover pear.pdepend.org
    sudo pear channel-discover pear.phpmd.org
    sudo pear channel-discover pear.phpunit.de
    sudo pear channel-discover components.ez.no
    sudo pear channel-discover pear.symfony-project.com
    sudo pear install pdepend/PHP_Depend
    sudo pear install phpmd/PHP_PMD
    sudo pear install phpunit/phpcpd
    sudo pear install phpunit/phploc
    sudo pear install PHPDocumentor
    sudo pear install PHP_CodeSniffer
    sudo pear install --alldeps phpunit/PHP_CodeBrowser
    sudo pear install --alldeps phpunit/PHPUnit
    


    Configure:
    - download build.xml from http://jenkins-php.org/ and edit it for your project
    - download the PMD rules from http://phpmd.org/ and put them in the project as /build/phpmd.xml
    - optionally add your own CodeSniffer rules (see http://pear.php.net/ ) and put them in the project as /build/phpcs.xml
    - configure PHPUnit in /tests/phpunit.xml
    - download the job template
    cd /var/lib/jenkins/jobs/
    sudo git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
    sudo chown -R jenkins:nogroup php-template/
    sudo /etc/init.d/jenkins stop
    sudo /etc/init.d/jenkins start
    
    - create a new job from the template, bind it to the repository (you may have to add a plugin for your version control system) and rejoice.

    Backup


    Clearly, equipment failures cannot be avoided, but that thought feels completely different when you have a backup copy of the data (and a fresh one at that).

    The process itself is divided into two types:
    - full (I run it selectively, when I feel that a lot of changes have accumulated in the system and need to be saved)
    - partial (all parts of the system that change daily: databases, source code, etc.)

    I collect the full dump using a remastersys script.
    View script
    #!/bin/bash
    # measure how long the backup takes
    start=`date +%s`
    echo '[FULL BACK UP Start]'
    DATE_NOW=`date +%F`
    echo '[FULL BACK UP Dump Creation]'
    # start the backup
    sudo remastersys backup install-$DATE_NOW.iso
    echo '[FULL BACK UP Dump Saving]'
    # move it to the main storage folder
    sudo cp /home/remastersys/remastersys/install-$DATE_NOW.iso /home/backups/system-iso/install-$DATE_NOW.iso
    sudo cp /home/remastersys/remastersys/install-$DATE_NOW.iso.md5 /home/backups/system-iso/install-$DATE_NOW.iso.md5
    echo '[FULL BACK UP Clean up]'
    # clean up tmp
    sudo remastersys clean
    echo '[FULL BACK UP End]'
    end=`date +%s`
    runtime=$((end-start))
    echo 'Backup time = '$runtime' sec(s)'


    The partial backup script is run nightly from cron.
    View script
    #!/bin/bash

    start=`date +%s`
    echo '[BACK UP Start]'
    DATE_PREF=`date +%F`
    echo '[BACK UP Config]'
    # set up the folder paths
    BACKUP_MYSQL_DIR=/home/backups/mysql/$DATE_PREF
    BACKUP_WWW_DIR=/home/backups/www/$DATE_PREF
    BACKUP_GIT_DIR=/home/backups/git/$DATE_PREF
    echo '[BACK UP Clean up]'
    # delete everything older than a week
    find /home/backups/mysql/ -mindepth 1 -mtime +7 -print -delete >/dev/null 2>&1
    find /home/backups/www/ -mindepth 1 -mtime +7 -print -delete >/dev/null 2>&1
    find /home/backups/git/ -mindepth 1 -mtime +7 -print -delete >/dev/null 2>&1
    echo '[BACK UP Not Cleaned Items]'
    ls /home/backups/mysql/
    ls /home/backups/www/
    ls /home/backups/git/
    echo '[BACK UP Back Up Hosts]'
    # back up the web hosts
    tar cpzf $BACKUP_WWW_DIR\-www.tgz /home/www/ >/dev/null 2>&1
    echo '[BACK UP Back Up Repositories]'
    # back up the repositories
    tar cpzf $BACKUP_GIT_DIR\-git.tgz /home/git/ >/dev/null 2>&1
    echo '[BACK UP Back Up MySQL]'
    # back up the databases
    mysqldump -q -u root -p<password> -h localhost tt_rss | gzip -c > $BACKUP_MYSQL_DIR\-tt_rss.sql.gz
    mysqldump -q -u root -p<password> -h localhost test | gzip -c > $BACKUP_MYSQL_DIR\-test.sql.gz
    mysqldump -q -u root -p<password> -h localhost redmine | gzip -c > $BACKUP_MYSQL_DIR\-redmine.sql.gz
    mysqldump -q -u root -p<password> -h localhost phpmyadmin | gzip -c > $BACKUP_MYSQL_DIR\-phpmyadmin.sql.gz
    mysqldump -q -u root -p<password> --skip-lock-tables -h localhost performance_schema | gzip -c > $BACKUP_MYSQL_DIR\-performance_schema.sql.gz
    mysqldump -q -u root -p<password> --skip-lock-tables -h localhost information_schema | gzip -c > $BACKUP_MYSQL_DIR\-information_schema.sql.gz
    mysqldump -q -u root -p<password> --events -h localhost mysql | gzip -c > $BACKUP_MYSQL_DIR\-mysql.sql.gz
    echo '[BACK UP New Items]'
    ls /home/backups/mysql/ | grep $DATE_PREF
    ls /home/backups/www/ | grep $DATE_PREF
    ls /home/backups/git/ | grep $DATE_PREF
    echo '[BACK UP End]'
    end=`date +%s`
    runtime=$((end-start))
    echo 'Backup time = '$runtime' sec(s)'
    echo '========================================================='

    As you can see, I keep roughly the last 7 days of copies.


    Thanks for reading! I hope it was interesting!

    P.S. If you have questions, I will be glad to help.
    P.P.S. Please share an invite.
