When installing a 32-bit application you run into the weirdest problems.

/opt/firebird/bin/gsec: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory

Good thing repos are available to solve most of life’s problems :

yum install libstdc++.i686

(on a 64-bit machine/OS)
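
If more 32-bit libraries turn out to be missing, ldd on the binary shows them all at once (a quick check, using the gsec path from the error above) :

# anything marked "not found" still needs its .i686 package installed
ldd /opt/firebird/bin/gsec | grep "not found"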

A small “bugfix” for Proxmox 4.1 with ZFS underneath : on a clean install, Proxmox will fail to create an LXC (Linux container). One has to create a ZFS storage before containers can be made … The error during creation:

Warning, had trouble writing out superblocks.
TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /var/lib/vz/images/100-disk-1.raw' failed: exit code 144

The solution :
Datacenter -> Add ZFS
– id : whatever
– zfs pool : I took rpool/ROOT but you are free in this as far as I could see
– I did not restrict nodes, although I would not use it other than local.
– Enable : on
– Thin provision : no clue, I left it unmarked.

Then during the LXC creation, select lxc_storage (this is the id you gave).
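
If you prefer the command line over the GUI, something along these lines should create the same storage (a sketch based on the pvesm tool shipped with Proxmox; double-check the syntax on your version) :

# add a ZFS-backed storage with id "lxc_storage" on top of rpool/ROOT
pvesm add zfspool lxc_storage -pool rpool/ROOT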

// update 7/04/2017

  • When importing from console :
    pct restore $container_id container.tar.lzo -storage lxc_storage


Welcome, readers of /var/log/messages who have updated your server to the latest version of nfs-utils (at the time of writing : nfs-utils-1.3.0-0.21.el7).

# yum info nfs-utils
Name        : nfs-utils
Arch        : x86_64
Epoch       : 1
Version     : 1.3.0
Release     : 0.21.el7
Size        : 1.0 M

The full error message in /var/log/messages :

Jan 21 14:00:23 mocca nfsdcltrack[]: sqlite_insert_client: insert statement prepare failed: table clients has 2 columns but 3 values were supplied

It’s not a critical error, but the latest version of nfs-utils has it. Red Hat already has a few bug reports on it, and they even have a hidden solution, which I despise heavily. As far as I gather from the bug reports, that solution is downgrading to an earlier version of nfs-utils. This I have never done before, but I would think it is yum downgrade nfs-utils; that however doesn’t do much. From what I gather from the error message, some sqlite database gets an insert statement that no longer matches its slightly changed table.
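
If you want to see what nfsdcltrack is actually complaining about, the table can be inspected with sqlite3 (the database path is an assumption, the default nfsdcltrack location on EL7 as far as I know) :

# show the schema of the clients table the insert statement is fighting with
sqlite3 /var/lib/nfs/nfsdcltrack/main.sqlite '.schema clients'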

The nfsdcltrack tool is not a critical application on my machine; it only seems to matter when the service crashes or a reboot is done.

Anyway, at this point the advice is : ignore the message. Damn Red Hat and its hidden-solution policy!

I got this message on a machine with 2 soft mounts.

Server :

cat /etc/exports
/home/incoming 000.000.00.0(rw,no_root_squash)
/home/incoming 000.000.00.0(rw,no_root_squash)

note : IPs removed.

Client :

servername:/home/incoming on /tape type nfs (rw,soft,vers=4,addr=000.000.00.0,clientaddr=000.000.00.0)

note : IPs removed.

ZFS on Linux version check

20 January, 2016

I’m so hooked on CentOS lately that I had no clue how to find the version of ZFS installed on a Proxmox (Debian-based) OS. A “hacky” way is checking dmesg during module load, but I’m sure an easier solution must exist.

dmesg | grep ZFS

# if nothing, then perhaps it was not loaded (?)
modprobe zfs

result :

 dmesg | grep ZFS
[0.000000] Command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-4.2.6-1-pve root=ZFS=/ROOT/pve-1 ro boot=zfs root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
[0.000000] Kernel command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-4.2.6-1-pve root=ZFS=/ROOT/pve-1 ro boot=zfs root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
[1.592028] ZFS: Loaded module v0.6.5.3-1, ZFS pool version 5000, ZFS filesystem version 5
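
A less hacky alternative, assuming the module is already loaded (this should work on ZFS on Linux installs, though I have not checked it on every version) :

# the module exposes its version directly
cat /sys/module/zfs/version

# or ask modinfo about the installed module
modinfo zfs | grep -iw version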


I got a server with an LSI MegaRAID 9271-8i RAID controller in it, and god knows the console application is bad. There is no way one can master it without at least two PhDs in applied cryptography or similar. You can download the console application from the Avago website, where you need to search for MegaCli 5.5 P2; this zip contains MegaCli-8.07.14-1.noarch.rpm (at the time of writing), which in turn contains the application to talk to the controller (/opt/MegaRAID/MegaCli/MegaCli64).

So now that you have the console app, feel free to use these commands at your own risk. (They tend to work for me.)

Creating a raid0 on every disk separately (= JBOD implementation, which one should NOT use btw, since replacing disks is a [enter bad curse word]). E stands for Enclosure Device ID, and S for Slot number. The -a is followed by the number of the RAID card, in case you would have more of these … monsters. (Note: one can also use ALL)

MegaCli64 -CfgLdAdd -r0 [E:S] -a0 -NoLog

How to get info on the disks you ask ?

MegaCli64 -pdlist -a0

Save your sanity, silence the active alarm

MegaCli64 -AdpSetProp AlarmSilence -aALL

Replace a disk : (because a single command would be too easy)

# let's find where the disk is (blink)
MegaCli64 -PdLocate -start -physdrv[E:S]  -a0

# stop it again
MegaCli64 -PdLocate -stop -physdrv[E:S] -a0

# now mark it as offline
MegaCli -PDOffline -PhysDrv [E:S] -a0

# now mark it as missing
MegaCli -PDMarkMissing -PhysDrv [E:S] -a0

# now make it ready for ejecting the disk (I'm not kidding)
MegaCli -PdPrpRmv -PhysDrv [E:S] -a0

# now replace the disk

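Once the new disk is physically in, the controller may start rebuilding on its own (hot spare / auto-rebuild settings permitting). If not, these commands are documented to mark the replacement and kick off the rebuild; I have not needed them myself, so treat the flags as an assumption and verify against the MegaCli documentation :

# tell the controller this disk replaces the missing one (array/row numbers come from -PdGetMissing)
MegaCli64 -PdReplaceMissing -PhysDrv [E:S] -arrayN -rowN -a0

# start the rebuild on that slot
MegaCli64 -PDRbld -Start -PhysDrv [E:S] -a0
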
Might add some other gems later, but for now this is my personal cheat sheet.


I already got Let’s Encrypt working with CentOS 6.7 and Apache; recently I tried out Nginx. I wasn’t blown away by the speed, but I do like the way the config is made. The fact that it should be faster is icing on the cake (a simple benchmark showed it to be at least a little bit better). So I switched over -I’m no expert on Nginx- and below I posted my current config for SSL on nginx (version 1.8.0).

This was part of the server block :

listen 443 ssl;

server_name  svennd.be;
ssl_certificate /etc/letsencrypt/live/svennd.be/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/svennd.be/privkey.pem;

# ssl session caching
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;

# openssl dhparam -out dhparam.pem 2048
ssl_dhparam /etc/nginx/cert/dhparam.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=15768000;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=86400;
resolver_timeout 10;
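
Not shown above : the plain HTTP server block that sends everything to https. A minimal sketch of what that could look like (server_name is mine, adjust to your own; this is not a copy of my full config) :

server {
    listen 80;
    server_name svennd.be;
    # permanent redirect to the https version of the same request
    return 301 https://$host$request_uri;
}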

Note that I used some “tricks” to make the penalty of SSL a little bit smaller. I’m also planning on posting my full nginx config, but that still needs some more work. I’d like to explain a bit why I chose these values.

Don’t forget : Encrypt the web!

Let’s Encrypt the web, this is where my https story began!

This is one of those commands every sysadmin should read up on in the manpage. I use history on a daily basis, and recently I bumped into an article with a nice find, although I never stopped to think this was supposed to be out there : removing/filtering stuff from history. If you don’t know the history command, this is how it looks (part) :

# history
  988  php -v
  989  which php
  990  ll /usr/bin/php
  991  ls -l /usr/bin/php
  992  cat /var/log/messages
  993  iptables -L
  994  exit
  995  iptables -L -n
  996  iptables-save

While I generally prefer to have close to infinite history in Linux, when you have just entered :

mysql -u root -pMYFANCYDIFFICULTPASSWORD main_database

You start to think, gee, I hope nobody checks my history here… or

rm -rf *

I hope nobody just copies this as root… -I know it’s a bad habit, don’t judge-. On the other hand, I type ls, ll, pwd, cd about 15 times a minute when logged in to a console (I’m that sort of crazy). That is not really useful information to store.

So I was very happy to learn these new tricks around the history command. The first thing to do is increase the total number of commands saved, because it has happened that the command I was searching for was just out of reach. Since this is low I/O and storage these days is cheap, I don’t see a reason not to increase it hugely. I changed HISTSIZE and HISTFILESIZE to 1000 and 10000.

changed in ~/.bashrc

HISTFILESIZE=10000
HISTSIZE=1000

Why two variables ? In short, HISTSIZE is the in-memory storage per session, while HISTFILESIZE is the “long term” storage.

Another handy bash variable is HISTCONTROL. In the current version of GNU Bash (history is part of GNU Bash), three options are available :

  • ignorespace : when you enter a line starting with a space, it’s not saved in history
  • ignoredups : when a command is repeated, only the first instance is saved
  • erasedups : when a command is entered, all earlier duplicates of it are removed from the history. For example, when running regular backups of MySQL with mysqldump, only the latest mysqldump line is saved. (this makes it more of a “library”)

ignorespace and ignoredups combined also have a shorthand : ignoreboth.

I also added this to ~/.bashrc

HISTCONTROL=ignorespace:ignoredups:erasedups

When using history, an indication of when a command was issued can be useful, in case you want to see what your drunken colleague did at three in the morning as root. This can be achieved with HISTTIMEFORMAT, again a bash built-in variable; it internally uses strftime() so you can set up how the date is displayed. I don’t care for seconds or years (too specific, too unspecific) so I left those out. Note that you best end with a space, as it gets concatenated with the command.

HISTTIMEFORMAT="%H:%M %m/%d "
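
With that format set, history output looks roughly like this (timestamps invented for illustration) :

# history | tail -3
  994  03:12 01/21 iptables -L
  995  03:12 01/21 iptables -L -n
  996  03:13 01/21 iptables-save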

The last GNU Bash variable is HISTIGNORE, which filters out commands. I removed some that I know by heart, although ignoredups and erasedups are probably more than enough to catch most redundant commands. Entries are separated by colons.

HISTIGNORE="ls":"ll":"pwd":"free *"

This would filter out ls, ll (alias of ls -l) and pwd. The command free with flags would also be fully ignored (free -m being among the best-known ones).

The last setting to play with is shopt, the shell option “manager”. You want to change histappend : this appends the history to the history file instead of overwriting it, and I don’t exactly know why it is not on by default. It has occurred that multiple commands were gone from my history while I was certain I had entered them; this might be an issue with multiple sessions overwriting one another, and histappend might “solve” it. You can check for this option :

shopt | grep histappend

its either on or off, enabling or disabling can be done from console with :

# enable
shopt -s histappend

# disable
shopt -u histappend

Although I just appended it to  ~/.bashrc

All that being said, I found it a very useful find. However, if you hit one of the cases I described at the start of this article, removing a single line can be accomplished with :

history -d LINE_NUMBER

or if you want to hide your tracks :

history -c

which will remove all history. (not recommended)

note : I tested this with GNU bash version 4.1.2(1); your mileage may vary!

All together :

HISTFILESIZE=10000
HISTSIZE=1000
HISTCONTROL=ignorespace:ignoredups:erasedups
shopt -s histappend

Update : I removed HISTIGNORE, as it also ignores “ls *” commands, so there is no way to get back the dir you just typed with the arrow-up key. I commonly use :

ls /data/
ls /data/somedir
ls /data/somedir/with
ls /data/somedir/with/something/

Typelog : “ls /dat”, tab, arrow up, som, tab arrow up, wit tab arrow up, som tab


Creating your own search database, not with blackjack and most definitely not with hookers, but with mlocate

Locate is one of the possible search options on Linux/GNU. Locate is part of the mlocate package, which contains the locate binary. Locate is a database-backed search; as such it’s faster than find if you search over a large number of indexed directories. The disadvantage is that you first have to build the database in order to search it.

I believe this is done on some distros every night (cron); if the machine is offline during that time, the database does not get updated. The mlocate package has a tool, updatedb, that can be run to update the ‘central’ database. It’s however also possible to create a database for, say, an external drive; this can be handy if you want to see whether a file is there while the drive itself is not mounted (or not even in close proximity to a computer). For me, I wanted to make tapes searchable even when they were off the machine, in a top-secret black-site location. (aka a box behind the server)

This can be done easily with updatedb & locate.

# create the database
# the -l 0 sets the updatedb to ignore permissions 
updatedb -l 0 -U /media/external_drive -o external_drive.db

# search a file
locate -d /locate/external_drive.db IsMyImageHere

# limit the search to 10 items
locate -d /locate/external_drive.db -n 10 IsMyImageHere

# show stats on this database
locate -d /tape/external_drive.db -S
Database /tape/external_drive.db:
 92,709 directories
 7,440,981 files
 762,003,203 bytes in file names
 134,485,291 bytes used to store database

# speed showoff for database above :
time locate -d /tape/external_drive.db not_a_file

real 0m6.843s
user 0m6.782s
sys 0m0.061s

~7 seconds to search 92,709 directories and 7,440,981 files … pretty nice 🙂
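
If the drive gets mounted regularly, a cron entry along these lines (reusing the paths from above) could keep the external database fresh; just a sketch, adjust paths and timing to taste :

# crontab -e : refresh the external drive database every night at 03:00, but only if it is mounted
0 3 * * * mountpoint -q /media/external_drive && updatedb -l 0 -U /media/external_drive -o /tape/external_drive.db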

Source: Linode, DigitalOcean and VULTR Comparison

I’m currently having a lot of problems with DigitalOcean; I don’t know if my setup/configuration is bad or DO is just crapping on my VPS…

time dd if=/dev/zero of=test bs=16k count=10000
10000+0 records in
10000+0 records out
163840000 bytes (164 MB) copied, 12.0611 s, 13.6 MB/s

real 0m12.093s
user 0m0.004s
sys 0m0.374s

This is not really giving a lot of hope …
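
Side note : dd from /dev/zero without a sync flag partly measures the page cache rather than the disk, so the numbers can be flattering. Adding conv=fdatasync makes dd flush the data to disk before reporting the throughput, which gives a more honest figure :

# same test, but flush to disk before reporting
time dd if=/dev/zero of=test bs=16k count=10000 conv=fdatasync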

// update

I contacted support and they offered -after confirmation of these results- to move me to another hypervisor in the same region. Weirdly enough, they asked me to shut down the machine myself and then respond to the ticket. While I know I could have had a floating IP/failover/… they could have just shut down, moved and restarted. Minimizing the downtime was not an option. That said, they were quick to help me out, see the timeline :

  • me> Created ticket @ 10:24
  • digitalocean> First response @ 10:58
  • My repeat test @ 11:25
  • digitalocean> second response @ 11:54
  • confirmation of move @ 12:08
  • digitalocean> move confirmed. @ 12:43

I had the database backed up and took a snapshot before confirming the move; because I had to shut down myself, the downtime was ~2 hours. The sad part is, the upgrade itself only took 4 minutes and 2 seconds, plus 8 seconds to boot up again. So a 5 minute “upgrade” took the site out for 2 hours.

Rerunning this dd now, the lowest value was 177 MB/s and the highest 646 MB/s.

time dd if=/dev/zero of=test bs=16k count=10000
10000+0 records in
10000+0 records out
163840000 bytes (164 MB) copied, 0.253633 s, 646 MB/s

real    0m0.256s
user    0m0.007s
sys     0m0.247s

That is more like a RAID5 SSD speed.

I recently switched from Apache to Nginx and also to a new server. Yeey! I copied /etc/letsencrypt/ over from the first server to the second. Everything seemed to be fine. Sadly, nope! For some reason it doesn’t accept certificates made on another server. Fixing it is easy once you find it :

# remake / save your config file
nano /etc/letsencrypt/cli.ini

# remove all info on letsencrypt
rm -rf /etc/letsencrypt/

# remake cert's
/opt/letsencrypt/letsencrypt-auto --config /etc/letsencrypt/cli.ini --debug certonly

# restart server might be needed
service nginx restart
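
For reference, a minimal cli.ini could look something like this (example values only, swap in your own email, domains and webroot; these are standard letsencrypt-auto/certbot options) :

# /etc/letsencrypt/cli.ini
rsa-key-size = 4096
email = admin@example.com
domains = example.com, www.example.com
authenticator = webroot
webroot-path = /var/www/html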

Happy encrypting 🙂