Sometimes you want newer software on CentOS/Red Hat. But it is exactly because of that slower adoption that we can use CentOS for so long, and the support they deliver is amazing. That being said, sometimes there is no way around it and you need newer software. Python 3.x is such a package. So let’s go ahead and install Python 3.6 using IUS (Inline with Upstream Stable).

Some development tools to compile software from source code :

yum install yum-utils
yum groupinstall development

Add the repository to our pool, then update the repos :

yum install https://centos7.iuscommunity.org/ius-release.rpm
yum update

And install Python 3.6 :

yum install python36u python36u-pip

At the time of writing this installs Python 3.6.4, which can be run :

python3.6
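
A quick sanity check that both the interpreter and pip ended up on your PATH :

python3.6 --version
pip3.6 --version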

This can be used to install useful tools such as NanoPlot 🙂

pip3.6 install NanoPlot
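
A minimal example run; the fastq filename here is just a placeholder, and NanoPlot writes its plots to the directory given with -o :

NanoPlot --fastq reads.fastq.gz -o nanoplot_output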

That’s it 🙂

UUID of hard disk

19 April, 2018

If you have ever been bitten by the changing letters of Linux hard drives, then stop the crying: UUID is here to save the day. Especially if you fubar’d your installation for the second time and had to reinstall once again, this is a useful tip. Hard disks and virtual RAID disks have a UUID, a Universally Unique IDentifier.

You can use them to access those drives in /etc/fstab, instead of the /dev/sd* names.

You can find the UUID by poking the disk the old-fashioned way :

blkid /dev/sda1

This returns :

[root@server~]# blkid /dev/sda1
/dev/sda1: UUID="696b8eba-f884-4a0a-8f55-b90df6c56b50" TYPE="xfs" PARTLABEL="primary" PARTUUID="6d0f4e74-2929-4a81-9821-dc318013265c"

This can be added in /etc/fstab instead of /dev/sda1, which can change if you remove or add another hard disk (it can even change between reboots).

/dev/sda1 /data                 xfs defaults 0 0

Can become :

UUID=696b8eba-f884-4a0a-8f55-b90df6c56b50 /data                 xfs defaults 0 0

This keeps working whether you change hardware or not. (well, not when the disk itself gets replaced!)
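
Before rebooting, it’s worth testing the new entry; a quick sanity check, assuming /data is not in use :

umount /data     # assuming nothing is using /data right now
mount -a         # mounts everything in fstab; an error here points at a typo
df -h /data      # confirm it came back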

Note : besides blkid, UUIDs can also be found in the directory structure under /dev/disk/by-uuid. I am however uncertain whether this gets populated in less happy circumstances (such as grub/recovery/emergency boots) :

[root@server ~]# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx. 1 root root 10 Apr 19 11:06 265fc00c-2192-46c0-9289-5fc87221d775 -> ../../sdb5
lrwxrwxrwx. 1 root root 10 Apr 19 11:06 2fae283e-98ec-48ea-88c0-226f8c245900 -> ../../sdb4
lrwxrwxrwx. 1 root root 10 Apr 19 11:06 64D3-DD1F -> ../../sdb1
lrwxrwxrwx. 1 root root 10 Apr 19 11:06 696b8eba-f884-4a0a-8f55-b90df6c56b50 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Apr 19 11:06 a2836a81-333a-4c3a-a383-c16e56a4350f -> ../../sdb2
lrwxrwxrwx. 1 root root 10 Apr 19 11:06 c81ca887-d710-4bb4-ab3f-eb38517eadff -> ../../sdb3


I have been playing with setting up a Rocks 7 cluster; our compute nodes have 3 disk slots. One should be used for the system, and the other two can be used in a RAID 0, which provides faster reads/writes but no redundancy or safety. For a cluster where data is never really “stored”, that is fine.

Configuration

To get a custom partition, one needs to copy the file custom-partition.xml to the site-profile directory :

cp /export/rocks/install/rocks-dist/x86_64/build/nodes/custom-partition.xml /export/rocks/install/site-profiles/7.0/nodes/replace-custom-partition.xml

Then the content needs to be adapted. The format is based on Red Hat kickstart. I used this config :

<?xml version="1.0" standalone="no"?>
<kickstart roll="base">
<!-- Custom Partitioning Node -->
<pre>
<!-- clean 3 disks, 50gb root, 20gb swap, 10gb /var, raid0 /tmp  -->

echo "clearpart --all --initlabel --drives=sda,sdb,sdc
part / --size 50000 --ondisk sda
part swap --size 20000 --ondisk sda
part /var --size 10000 --ondisk sda
part raid.00 --size 1 --grow --ondisk sdb
part raid.01 --size 1 --grow --ondisk sdc
raid /tmp --level=0 --device=md0 raid.00 raid.01" &gt; /tmp/user_partition_info

</pre>
</kickstart>

Besides the XML tags, the important parts are :

clearpart --all --initlabel --drives=sda,sdb,sdc

This initializes the disks; the drives should be available under /dev/sd*.

part / --size 50000 --ondisk sda

For the root I take a 50GB partition, on the primary (system) disk.

part raid.00 --size 1 --grow --ondisk sdb

This is where the RAID disks get configured. I set them to the maximal size of the disks; be sure to change the --ondisk parameter.

raid /tmp --level=0 --device=md0 raid.00 raid.01

Here you define the RAID. If you want to be a lot safer, level=1 would also help with read speed, although at the cost of 50% of the storage.
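
For reference, the mirrored variant of that line would simply be :

raid /tmp --level=1 --device=md0 raid.00 raid.01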

&gt; /tmp/user_partition_info

Important to note is that “>” should be encoded as &gt;, otherwise it won’t work, as described in the documentation.

Force reinstall

If, like me, your cluster is already installed, you will want to force this setup onto the nodes. Either way, even if you still need to install the nodes, you have to push this configuration into the active distribution you are serving over PXE. This can be done with :

cd /export/rocks/install
rocks create distro

In case you already installed some nodes, you first need to remove the .rocks-release file from each first partition. This can be done using the provided script (/opt/rocks/nukeit.sh) :

# loop over all mounted filesystems and remove the .rocks-release marker
for file in $(mount | awk '{print $3}')
do
  if [ -f "$file/.rocks-release" ]
  then
    rm -f "$file/.rocks-release"
  fi
done

Then run it (for compute-0-0!) :

ssh compute-0-0 'sh /opt/rocks/nukeit.sh'

Once that is done, you can tell the database to remove the partition table and reinstall upon next boot :

rocks remove host partition compute-0-0
rocks set host boot action=install compute-0-0

Finish off with a reboot, which in turn will initiate the reinstall :

ssh compute-0-0 'reboot'
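
If you have a whole rack of nodes to redo, a small loop on the frontend saves some typing. This is just a sketch, assuming your nodes are named compute-0-0 through compute-0-9 :

for node in compute-0-{0..9}
do
  ssh "$node" 'sh /opt/rocks/nukeit.sh'        # remove the .rocks-release markers
  rocks remove host partition "$node"          # forget the stored partition table
  rocks set host boot action=install "$node"   # reinstall on next boot
  ssh "$node" 'reboot'
done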

Happy computing !


I’m writing a “fun” script, and one of the fun commands it executes is Asciiquarium. This is my guide on how I got it working on CentOS 6 🙂

First the dependencies :

yum install perl-Curses perl-ExtUtils-MakeMaker

If you miss one, you will get this error :

Can't locate ExtUtils/MakeMaker.pm in @INC

First we need Term::Animation, which can be installed by downloading and building it locally. Note that make test will fail due to Test::More missing on CentOS; that’s not a big deal, it will work fine without it.

wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz
tar -zxvf Term-Animation-2.4.tar.gz
cd Term-Animation-2.4/
perl Makefile.PL && make
make install
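
A quick one-liner to verify the module is visible to Perl; if the install failed, perl will complain it can’t locate Term/Animation.pm :

perl -MTerm::Animation -e 'print "Term::Animation loaded fine\n"'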

After we have that installed, we can download Asciiquarium and install it :

wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz
tar -zxvf asciiquarium.tar.gz
cd asciiquarium_1.0/
cp asciiquarium /usr/games/
chmod 755 /usr/games/asciiquarium

And there we go: running /usr/games/asciiquarium will give you a nice fish aquarium on screen 🙂

ASCIIQuarium in action

gem: Command not found

19 January, 2018

While installing a Ruby package I hit this error :

gem install --version 1.15.4 bundler
make: gem: Command not found
make: *** [setup] Error 127

This can be resolved by installing the package :

yum -y install rubygems-devel

or for the Debian friends :

apt-get install rubygems
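
Either way, you can confirm gem is on the PATH before retrying the build :

which gem
gem --version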

Happy building 🙂

With the latest stable Rocket.Chat (0.60.3) on a CentOS 7 machine I got this error :

# node main.js
/opt/Rocket.Chat/programs/server/boot.js:50
    const { pause } = require("./debug.js");
          ^

SyntaxError: Unexpected token {
    at exports.runInThisContext (vm.js:53:16)
    at Module._compile (module.js:373:25)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at Object.<anonymous> (/opt/Rocket.Chat/main.js:4:1)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)

This can be fixed by installing a few dependencies (magic) :

npm install -g inherits n

Then it seems to be recommended to run Node.js 8.9.3 :

n 8.9.3
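
You can verify the switch took effect :

node --version   # should now print v8.9.3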

After that I got greeted by a well-known issue on CentOS … old GLIBC libraries :

[root@rocket Rocket.Chat]# node main.js
module.js:664
  return process.dlopen(module, path._makeLong(filename));
                 ^

Error: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /opt/Rocket.Chat/programs/server/node_modules/fibers/bin/linux-x64-57/fibers.node)
    at Object.Module._extensions..node (module.js:664:18)
    at Module.load (module.js:554:32)
    at tryModuleLoad (module.js:497:12)
    at Function.Module._load (module.js:489:3)
    at Module.require (module.js:579:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (/opt/Rocket.Chat/programs/server/node_modules/fibers/fibers.js:24:38)
    at Module._compile (module.js:635:30)
    at Object.Module._extensions..js (module.js:646:10)
    at Module.load (module.js:554:32)
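
Before rebuilding anything, you can check which GLIBCXX versions your system libstdc++ actually provides; the error above asks for GLIBCXX_3.4.20 :

strings /lib64/libstdc++.so.6 | grep GLIBCXX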

Luckily, in this case there is a quick and easy workaround. Install some build tools, in case they are missing :

yum install g++ build-essential

If these are not available you can try :

yum install gcc gcc-c++ make openssl-devel

Then rebuild the fibers module with node-gyp :

npm install -g node-gyp
cd /opt/Rocket.Chat/programs/server/node_modules/fibers/
node-gyp rebuild
cp -f build/Release/fibers.node bin/linux-x64-57/fibers.node

And Rocket.Chat will boot up again 😀 yippee !

systemctl start rocketchat.service

Happy rocketing ! 🙂

Thanks to stally!

Avoid rsyncing Thumbs.db

9 January, 2018

If your sync is copying a bunch of Thumbs.db files from Windows because someone opened the directory, you can easily exclude them from the sync :

rsync -avhn --exclude 'Thumbs.db' source destination

Note : I added -n (dry run) to guard against copy/paste mistakes 🙂
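
Once the dry run output looks right, drop the -n and run it for real :

rsync -avh --exclude 'Thumbs.db' source destination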

If you have plenty of file types/extensions you want ignored/excluded, use :

rsync -avhn --exclude-from '/opt/exclude_rsync_list.txt' source destination

An example of this file could be :

.DS_Store
Thumbs.db

Happy syncing.

Suddenly a Linux server, only serving as an “open” Samba share (guest account allowed), stopped working. I logged in and found Samba running, no weird network issues, nothing. A mystery !

After a service smb restart and a reboot (sue me, uptime), I increased the smb log level to 3. This is done by changing /etc/samba/smb.conf :

[global]
        workgroup = SAMBA
        security = user
        passdb backend = tdbsam

        [...]

        map to guest = Bad User
        log level = 3

        [... below come shares ...]
[data]
  path = /data
  [...]
  force user = testuser
  guest ok = yes
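
Before restarting smb, you can sanity-check the edited config with testparm, which ships with Samba, and then restart the service (CentOS 7 paths) :

testparm -s /etc/samba/smb.conf   # parses the config and errors out on typos
systemctl restart smb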

You can then follow what Samba is doing in /var/log/samba/log.smbd (on CentOS, after restarting the service). This is what I found :

[2017/12/05 11:10:37.388846,  3] ../source3/auth/auth.c:178(auth_check_ntlm_password)
  check_ntlm_password:  Checking password for unmapped user [DESKTOP-XXXXXX]\[svennsvenndDESKTOP-XXXXXX] with the new password interface
[2017/12/05 11:10:37.388886,  3] ../source3/auth/auth.c:181(auth_check_ntlm_password)
  check_ntlm_password:  mapped user is: [TEMPSTORAGE]\[svennd]@[DESKTOP-XXXXXX]
[2017/12/05 11:10:37.388985,  3] ../source3/auth/check_samsec.c:399(check_sam_security)
  check_sam_security: Couldn't find user 'svennd' in passdb.
[2017/12/05 11:10:37.389029,  2] ../source3/auth/auth.c:315(auth_check_ntlm_password)
  check_ntlm_password:  Authentication for user [svennd] -> [svennd] FAILED with error NT_STATUS_NO_SUCH_USER
[2017/12/05 11:10:37.389077,  3] ../source3/auth/auth_util.c:1610(do_map_to_guest_server_info)
  No such user svennd [DESKTOP-XXXXXX] - using guest account
[2017/12/05 11:10:37.390215,  3] ../source3/smbd/server_exit.c:246(exit_server_common)
  Server exit (NT_STATUS_CONNECTION_RESET)

So in short: Windows tries to use my local account and fails, which is expected. Then Samba gives me the permissions of a guest account. Weirdly enough, after that Samba reports NT_STATUS_CONNECTION_RESET, or more simply put “server exit”. I tried to find more info on the recent patches using :

rpm -q --changelog samba-common-4.6.2-12.el7_4.noarch | less

At the time of writing, the latest “feature change” was way back in March; this installation was newer, so that could hardly be the issue.

* Fri Mar 31 2017 Guenther Deschner <gdeschner@redhat.com> - 4.6.2-0
- Update to Samba 4.6.2

In the end, we did not find the issue in the Samba server. The issue was the client, running an up-to-date Windows 10 (Fall Creators Update). This was the only change between a working and a non-working setup. So Windows must have changed some behavior? Tested with a Windows 7 machine, this suspicion was confirmed: there it worked. A workaround for my case was to set up, on the client, a username that the server does know :

C:\Users\svenn>net view \\shareserver
System error 53 has occurred.

The network path was not found.

C:\Users\svenn>net use \\shareserver\data /user:testuser
The command completed successfully.


C:\Users\svenn>net view \\shareserver
Shared resources at \\shareserver

Samba 4.6.2

Share name  Type  Used as  Comment

-------------------------------------------------------------------------------
data        Disk  (UNC)    storage
data_rgb    Disk           storage
The command completed successfully.

And after that I can browse and access the share through Explorer. While this for sure is not foolproof, for my case it’s enough (single client to server).

This registry setting in Windows seems to affect the issue, thanks Dominik!

A new project : tetromino

8 October, 2017

It’s been silent here for a while, but with good reason. I have been focusing my attention on a side-project: building a Tetris clone for the PS Vita 😀 (homebrew hacked, of course)

You can download the game here

I have even been featured at wololo’s, which is amazing 🙂

new look :

old look :

certificate out-of-date

8 September, 2017

That awkward moment when you write so many articles to help people automate and get certificates, and then don’t correctly validate that your own server is working correctly … 😀 Sorry for the downtime !

What happened is: I host 3 domains on 1 server; one of those 3 is no longer used, but the script still tried to request a certificate for it, which resulted in an error (I have been running this script weekly for quite some weeks now). Because of that, the nginx server never picked up the new certificates. That still left me weeks to notice, yet I did not, until today, while on holiday (horrible connection). On top of that I found out that Cloudflare’s “always on” is in fact free advertisement for Cloudflare’s broken feature (my website was not cached at all !), and visitors could not even accept the out-of-date certificate.

Well it’s now hotfixed. Enjoy my blog once more ! And I hope I can enjoy the rest of my holidays !