Bye OpenVZ, Hey LXC

27 January, 2016

Linux Containers (LXC), the new OpenVZ

I recently switched 2 of my 4 Proxmox nodes to the latest build (v4), where OpenVZ has made way for LXC. I'm not entirely sure about the reason why; I believe it was something to do with kernel modules and licensing. Either way, it changed, and all the commands changed with it, which is not great for efficiency: now we have both systems in place, and it will take some time to swap over. Anyway, this definitely helps.


This error occurred on a server with yum-cron installed, so I had ruled out updates, but I was wrong: an update solved this issue, about 100 MB of updates to be exact… A new lesson learned: don't blindly trust yum-cron!

As for this error, I believe it was solved by one of these packages:

Jan 26 09:23:29 Updated: 1:net-snmp-libs-5.7.2-24.el7.x86_64
Jan 26 09:23:49 Updated: 1:net-snmp-agent-libs-5.7.2-24.el7.x86_64
Jan 26 09:24:35 Updated: 1:net-snmp-5.7.2-24.el7.x86_64
Jan 26 09:24:46 Updated: 1:net-snmp-utils-5.7.2-24.el7.x86_64
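The lines above come straight from /var/log/yum.log (the CentOS 7 default path), so when in doubt about which package yum-cron pulled in, grep is your friend. A small sketch; the pattern is my own:

```shell
# List the most recent net-snmp updates recorded by yum:
grep 'Updated: .*net-snmp' /var/log/yum.log | tail -n 5
```

The same pattern works for any package name, which makes it easy to correlate a breakage with a nightly yum-cron run.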


Googlebot is a real hero; it believes my ancient posts are still not dead. In Webmaster Tools I still get messages about 404s on my website, linking to ancient posts I made a few years back. As long as someone remembers them, they are not dead! -Keep believing that- Anyway, while 404s don't really hurt the website -the web is dynamic- there is a cleaner way to report back that there is little to no chance those exact posts will ever return.

One can simply add 410s to the Nginx configuration to show bots and browsers that the page has been removed from the website/database and should not be requested any longer. I added them manually to the location / block with this ugly little hack:

location / {
    if ($uri ~ "/working-on-server-move/") { return 410; }
}

If you try something similar, first test with a 418 (HTCPCP); see a 418 at work.
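By the way, if the list of dead URLs grows, an nginx map scales better than stacking ifs inside location /. A sketch; the $is_gone variable name is my own:

```nginx
# in the http {} context: evaluated once per request
map $request_uri $is_gone {
    default                       0;
    ~^/working-on-server-move/    1;
}

# in the server {} context
if ($is_gone) { return 410; }
```

Adding another removed URL is then a one-line change to the map, and location / stays clean.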

When installing a 32-bit application you get the weirdest problems.

/opt/firebird/bin/gsec: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory

Good thing repos are available to solve most of life's problems:

yum install libstdc++.i686

(on a 64-bit machine/OS)
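Before reaching for .i686 packages, it helps to confirm the binary really is 32-bit and see exactly which libraries it is missing. A small sketch; BIN defaults to /bin/sh so it runs anywhere, but you would point it at the failing binary (e.g. /opt/firebird/bin/gsec):

```shell
#!/bin/sh
# Check a binary's architecture and list unresolved shared libraries.
BIN=${1:-/bin/sh}

file "$BIN"                          # shows ELF 32-bit vs ELF 64-bit
if ldd "$BIN" | grep "not found"; then
    echo "missing libraries above"
else
    echo "all shared libraries resolved"
fi
```

Each "not found" line tells you which .i686 library package you still need to install.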

A small "bugfix" for Proxmox 4.1 with ZFS underneath it: on a clean install, Proxmox will fail to create an LXC (Linux container). One has to create a ZFS storage before containers can be made… The error during creation:

Warning, had trouble writing out superblocks.TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /var/lib/vz/images/100-disk-1.raw' failed: exit code 144

The solution:
Datacenter -> Add ZFS
– ID: whatever
– ZFS pool: I took rpool/ROOT, but you are free in this as far as I could see
– I did not restrict nodes, although I would not use it other than locally.
– Enable: on
– Thin provision: no clue, I left it unmarked.

Then, during the LXC creation, select lxc_storage (this is the ID you gave).
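For reference, those GUI steps end up as an entry in /etc/pve/storage.cfg. A sketch of what it looks like with the id and pool from above; the content line is my assumption of what the GUI writes:

zfspool: lxc_storage
        pool rpool/ROOT
        content rootdir,images

Editing this file by hand is equivalent to clicking through Datacenter -> Add ZFS.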

// update 7/04/2017

  • When importing from console :
    pct restore $container_id container.tar.lzo -storage lxc_storage


Welcome, readers of /var/log/messages who have updated your server to the latest version of nfs-utils (at the time of writing: nfs-utils-1.3.0-0.21.el7).

# yum info nfs-utils
Name        : nfs-utils
Arch        : x86_64
Epoch       : 1
Version     : 1.3.0
Release     : 0.21.el7
Size        : 1.0 M

The full error message in /var/log/messages:

Jan 21 14:00:23 mocca nfsdcltrack[]: sqlite_insert_client: insert statement prepare failed: table clients has 2 columns but 3 values were supplied

It's not a critical error, but the latest version of nfs-utils has it. Red Hat already has a few bug reports on it, and they even have a hidden solution, which I despise heavily. As far as I can gather from the bug reports, that solution is downgrading to an earlier version of nfs-utils. I have never done that before, but I would think it is yum downgrade nfs-utils; this, however, doesn't do much. From what I gather from the error message, some sqlite database gets an insert statement that no longer matches a slightly changed table.

The nfsdcltrack tool is not a critical application on my machine; it only seems to matter when the service crashes or a reboot happens.

Anyway, at this point the advice is: ignore the message. Damn Red Hat and its hidden-solution policy!

I got this message on a machine with 2 soft mounts.

Server :

cat /etc/exports
/home/incoming 000.000.00.0(rw,no_root_squash)
/home/incoming 000.000.00.0(rw,no_root_squash)

note: IPs removed.

Client :

servername:/home/incoming on /tape type nfs (rw,soft,vers=4,addr=000.000.00.0,clientaddr=000.000.00.0)

note: IPs removed.

ZFS on linux version check

20 January, 2016

I'm so hooked on CentOS lately that I had no clue how to find the version of ZFS installed on a Proxmox (Debian-based) OS. A "hacky" way is checking dmesg during module load, but I'm sure an easier solution must exist.

dmesg | grep ZFS

# if nothing, then perhaps it was not loaded (?)
modprobe zfs

result :

 dmesg | grep ZFS
[0.000000] Command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-4.2.6-1-pve root=ZFS=/ROOT/pve-1 ro boot=zfs root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
[0.000000] Kernel command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-4.2.6-1-pve root=ZFS=/ROOT/pve-1 ro boot=zfs root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
[1.592028] ZFS: Loaded module v0.6.5.3-1, ZFS pool version 5000, ZFS filesystem version 5
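An easier solution does exist: on a box with ZFS installed you can ask the module itself, no dmesg grepping needed. A sketch; the sysfs path only exists once zfs.ko is loaded:

```shell
# Ask modinfo directly (works even before the module is loaded):
modinfo zfs | grep -iw '^version'

# Or, once zfs.ko is loaded, read it straight from sysfs:
cat /sys/module/zfs/version
```

Both report the same v0.6.5.3-style version string that dmesg logs at module load.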


I got a server with an LSI MegaRAID 9271-8i RAID controller in it, and god knows the console application is bad. There is no way one can master it without at least 2 PhDs in applied cryptography or similar. You can download the console application from the Avago website, where you need to search for MegaCli 5.5 P2; this zip contains MegaCli-8.07.14-1.noarch.rpm (at the time of writing), which in turn contains the application to talk to the controller (/opt/MegaRAID/MegaCli/MegaCli64).

So now that you have the console app, feel free to use these commands at your own risk. (They tend to work for me.)

Creating a RAID0 on every disk separately (= a JBOD implementation, which one should NOT use btw, since replacing disks is a [enter bad curse word]). E stands for Enclosure Device ID and S for Slot number. The -a flag is followed by the number of the RAID card, in case you have more of these… monsters. (Note: one can also use ALL.)

MegaCli64 -CfgLdAdd -r0 [E:S] -a0 -NoLog

How to get info on the disks, you ask?

MegaCli64 -pdlist -a0

Save your sanity, silence an active alarm:

MegaCli64 -AdpSetProp AlarmSilence -aALL

Replace a disk (because a single command would be too easy):

# let's find where the disk is (blink)
MegaCli64 -PdLocate -start -physdrv[E:S] -a0

# stop it again
MegaCli64 -PdLocate -stop -physdrv[E:S] -a0

# now mark it as offline
MegaCli64 -PDOffline -PhysDrv [E:S] -a0

# now mark it as missing
MegaCli64 -PDMarkMissing -PhysDrv [E:S] -a0

# now make it ready for ejecting the disk (I'm not kidding)
MegaCli64 -PdPrpRmv -PhysDrv [E:S] -a0

# now replace the disk

Might add some other gems later, but for now this is my personal cheat sheet.

Some nice references/tools exist to work with this evil RAID card.

I already got Let's Encrypt working with CentOS 6.7 and Apache; recently I tried out Nginx. I wasn't blown away by the speed, but I do like the way the config is structured, and the fact that it should be faster is icing on the cake (a simple benchmark showed it at least a little bit better). So I switched over -I'm no expert on Nginx- and below I posted my current config for SSL on Nginx (version 1.8.0).

This was part of the server block :

listen 443 ssl;

server_name  svennd.be;
ssl_certificate /etc/letsencrypt/live/svennd.be/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/svennd.be/privkey.pem;

# ssl session caching
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;

# openssl dhparam -out dhparam.pem 2048
ssl_dhparam /etc/nginx/cert/dhparam.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=15768000;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=86400;
resolver_timeout 10;
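One thing the 443 block leaves out is plain-HTTP visitors. A minimal companion server block (a sketch, not part of my posted config) sends them to HTTPS:

```nginx
server {
    listen 80;
    server_name svennd.be;

    # permanent redirect to the https version of the same URL
    return 301 https://$host$request_ri;
}
```

Combined with the Strict-Transport-Security header above, browsers will only hit this block on their very first visit.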

Note that I used some "tricks" to make the penalty of SSL a little bit smaller. I'm also planning on posting my full Nginx config, but that still needs some more work; I'd like to explain a bit why I chose these values.

Don’t forget : Encrypt the web!

Let’s Encrypt the web, this is where my https story began!

This is one of those commands every sysadmin should read the manpage of. I use history on a daily basis, and recently I bumped into an article with a nice find, although I never stopped to think this was supposed to be out there: removing/filtering stuff from history. If you don't know the history command, this is how it looks (in part):

# history
  988  php -v
  989  which php
  990  ll /usr/bin/php
  991  ls -l /usr/bin/php
  992  cat /var/log/messages
  993  iptables -L
  994  exit
  995  iptables -L -n
  996  iptables-save

While generally I prefer to have close to infinite history in Linux, when you just entered :

mysql -u root -pMYFANCYDIFFICULTPASSWORD main_database

You start to think, gee, I hope nobody checks my history here… or

rm -rf *

I hope nobody just copies this in the root… -I know it's a bad habit, don't judge-. On the other hand, I type ls, ll, pwd, cd about 15 times per minute when logged in to a console (I'm that sort of crazy). That is not really useful information to store.

So I was very happy to learn some new tricks with the history command in mind. First thing to do is increase the total amount of commands saved, because it has happened that the commands I am searching for are just out of reach. Since this is low IO, and storage these days is cheap, I don't see a reason not to increase it hugely. I changed HISTSIZE and HISTFILESIZE to 1000 and 10000.

changed in ~/.bashrc:

HISTFILESIZE=10000
HISTSIZE=1000

Why two variables? In short, HISTSIZE is the in-memory storage per session, while HISTFILESIZE is the "long-term" storage. (source)

Another handy bash variable is HISTCONTROL. In the current version of GNU Bash (history is part of GNU Bash), 3 options are available:

  • ignorespace: when you enter a line starting with a space, it is not saved in history
  • ignoredups: when a command is repeated, only the first instance is saved
  • erasedups: when a command is entered, it is checked against previous commands and those duplicates are removed. For example, when running regular backups of MySQL with mysqldump, only the latest mysqldump line is kept. (This makes history more of a "library".)

The combination of ignorespace and ignoredups also has a shorthand: ignoreboth.

I also added this to ~/.bashrc

HISTCONTROL=ignorespace:ignoredups:erasedups
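A quick way to see ignorespace and ignoredups at work without touching your real history is a throwaway script, where `set -o history` switches history on in a non-interactive shell (the password is obviously fake):

```shell
#!/bin/bash
# HISTCONTROL demo: the space-prefixed line and the duplicate
# should both be kept out of the history list.
export HISTCONTROL=ignorespace:ignoredups
set -o history
 mysql -u root -pFAKEPASSWORD 2>/dev/null    # leading space: not recorded
ls /tmp > /dev/null
ls /tmp > /dev/null
history
```

If HISTCONTROL does its job, history prints the ls line only once and the mysql line not at all.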

When using history, an indication of when a command was issued can be useful, in case you wanna look at what your drunken colleague did at three in the morning as root. This can be achieved with HISTTIMEFORMAT, again a bash builtin variable; internally it uses strftime(), so you can set up how to visualize the date. I don't care for seconds or years (too specific, too unspecific), so I left those out. Note that you best end with a space, as the date gets concatenated with the command.

HISTTIMEFORMAT="%H:%M %m/%d "
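Since HISTTIMEFORMAT takes plain strftime() codes, you can preview a format with date(1) before committing it to ~/.bashrc:

```shell
# Preview the HISTTIMEFORMAT string; prints e.g. "14:32 01/27 "
date +"%H:%M %m/%d "
```

Whatever date prints is exactly what will be prefixed to each history entry.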

The last GNU Bash variable is HISTIGNORE, which filters out commands. I added some that I know by heart, although ignoredups and erasedups are probably more than enough to catch most redundant commands. Entries are separated by colons.

HISTIGNORE="ls":"ll":"pwd":"free *"

This filters out ls, ll (an alias of ls -l) and pwd. The command free with flags would also be fully ignored (free -m being among the best known).

The last setting to play with is shopt, the shell option "manager". You wanna enable histappend, which appends the session history to the history file instead of overwriting it; I don't exactly know why this is not on by default. It has occurred that multiple commands were gone from history while I was certain I had entered them, which might be an issue with multiple sessions overwriting one another; histappend might "solve" this. You can check for this option:

shopt | grep histappend

its either on or off, enabling or disabling can be done from console with :

# enable
shopt -s histappend

# disable
shopt -u histappend

Although I just appended it to ~/.bashrc.

That all being said, I found it a very useful discovery. However, if you hit one of the cases I described at the start of this article, removing a single line can be accomplished with:

history -d LINE_NUMBER

or, if you wanna hide your tracks:

history -c

This will remove all history. (Not recommended.)

note: I tested this with GNU bash, version 4.1.2(1); your mileage may vary!

All together :

HISTFILESIZE=10000
HISTSIZE=1000
HISTCONTROL=ignorespace:ignoredups:erasedups
shopt -s histappend

Update: I removed HISTIGNORE, as it also ignores "ls *" when you use the arrow-up key, so there is no way to recall the directory you just typed. I commonly use:

ls /data/
ls /data/somedir
ls /data/somedir/with
ls /data/somedir/with/something/

Typelog: "ls /dat", tab, arrow up, "som", tab, arrow up, "wit", tab, arrow up, "som", tab

Some nice sources/examples of the history command exist.