I wanted to know the power usage of a server; on newer models this is shown on the web interface, but no luck on a slightly older machine. I tried upgrading the firmware (oooo-aaaaa, watch out when doing this, it's dangerous) to get this information. Finding the IPMI (Intelligent Platform Management Interface) device and getting its info can be done like this :

ipmitool bmc info
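
Since what I was really after was power usage, the sensor readings are worth a try as well (no luck on my older machine, as you'll read below); whether these return anything useful depends entirely on the BMC :

# list all sensors and look for anything reported in Watts
ipmitool sensor | grep -i watt

# on BMCs that support DCMI, this gives a direct power reading
ipmitool dcmi power reading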

The ipmitool package is in most repos, but can also be found on SourceForge (it's like GitHub but less cool). If you are against using ipmitool, you can also use this new thing called dmidecode (new to me, anyway). This will give you the info on the IPMI device :

dmidecode --type 38

Info for my device was :

# dmidecode 2.12
SMBIOS 2.6 present.

Handle 0x005E, DMI type 38, 18 bytes
IPMI Device Information
        Interface Type: KCS (Keyboard Control Style)
        Specification Version: 2.0
        I2C Slave Address: 0x10
        NV Storage Device: Not Present
        Base Address: 0x0000000000000CA2 (I/O)
        Register Spacing: Successive Byte Boundaries

Btw, power usage was not in the newest firmware either. I tried calculating it back from the UPS, but with 4 $servers and only 2 $servers_known it was a bit too hard, so I just bought a larger extra UPS. (#morepoweralwaysworks)

Interfaces to the baseboard management controller (BMC)

O, systemD.

8 February, 2016

Hell, nobody wants systemd, but it's here, and unless you wanna stay old and boring it's time to learn the new systemd slang.

Enable a service (the new chkconfig apcupsd on) :

systemctl enable apcupsd

Start a service (the new service apcupsd start) :

systemctl start apcupsd
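
A few more translations in the same style (the old command in the comment) :

# old : service apcupsd status / service apcupsd restart
systemctl status apcupsd
systemctl restart apcupsd

# old : chkconfig --list
systemctl list-unit-files --type=service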

There, what's all the fuss about now ?

Btw, apcupsd is a monitoring daemon for UPSes. Guess who had a power outage recently ?

Only recently I have been looking into web server optimization. In that search I found some really low-hanging fruit, such as keeping the byte-compiled version of a PHP file in memory using Zend Opcache or the now deprecated APC. I even started thinking in some very unconventional ways, such as a hybrid static-rendered blog. Until a few days ago my brain had marked HTTP/2 as not applicable/uninterested/ignore until further notice. Yesterday that changed: a presentation at ma.ttias.be boosted my interest in the lower levels (for me) of how the HTTP protocol works and why there are so many versions out there. Now I am no expert and am not going to be one for some time, but what hooked me is the parameter Keep-Alive.

Keep-Alive
In technical terms, Keep-Alive is a method to re-use a TCP connection. When a connection is created, the client sends a setup packet (SYN) to the server, which responds with an acknowledgment (SYN-ACK). This is done every time before a client sends its requests to the server. While this by itself only takes a small amount of time, websites these days have a ton of resources (images, JS, CSS, …), so it adds up to quite a few requests, each having to set up a connection. In HTTP/1.0 it was strictly one resource per connection, unless defined otherwise in the header. In 1999 the finalized HTTP/1.1 saw the light; the major improvements were HTTP persistent connections (aka Keep-Alive) and HTTP pipelining. With Keep-Alive enabled on the server, a compatible client will keep re-using the same connection for its requests. This shaves off some of the time spent in TCP connection setup.
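
You can see this from the client side with curl: give it the same host twice in one invocation and, if the server allows Keep-Alive, the verbose output will mention re-using the existing connection (the exact wording differs per curl version) :

# two requests, one curl invocation; watch the verbose output on stderr
curl -v http://example.com/ http://example.com/ > /dev/null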

KeepAlive on vs KeepAlive off

How much time can I win with this ?
Any speedup is relative and will vary depending on a lot of variables, many of which are out of your control. But to give an idea, I created a page with 20 CSS files and tested an Apache setup with KeepAlive on and off. I would expect (read: prediction) a speed increase equal to the ping time for every connection that is no longer needed because of Keep-Alive. Modern browsers will create 6-8 connections per domain to get the resources faster, independent of whether Keep-Alive is on or off (that is why people shard*, btw).

There are 21 files to be downloaded (index.html + 20 CSS files), which means that 7 browser connections would each need to download 3 files. So if all the resources are downloaded within the KeepAliveTimeout (I will get to that), my Keep-Alive site requires only 7 connections and my non-Keep-Alive site 21. In essence I win 14 TCP connections, at least if enough requests per connection are allowed (MaxKeepAliveRequests, I will also get to that). Considering each of those 14 extra connections needs an extra round trip to the server, the ping time is a good indication (I guess). Since I live rather close to my server I get ping times of around 10-20 ms, which works out to 20*14 = ~280 ms of speed gain. That is a huge amount on a static page.

* Sharding is the use of multiple domains to trick the browser into using more parallel connections. (You would recognize this from links like cdn1.svennd.be, img.svennd.be, static.svennd.be, …)

The first test is without Keep-Alive activated.

Keep-Alive off: you can clearly see the connection setup before each TTFB (time to first byte)

Now let's activate Keep-Alive;

Keep-Alive on: clearly only 6 connections do the connection setup, all the others re-use the existing connections.

This is Google's Chrome browser, but in case you don't have this browser, here is an overview of the colors :

Keep-Alive off : 1190 ms, Keep-Alive on : 588 ms, a speedup of 602 ms! Now to be honest, I cherry-picked this example; most requests with Keep-Alive off are ~800 ms and with Keep-Alive on ~600 ms. That is close to what I was expecting, around 200 ms of gain. Obviously this was a static example; dynamic pages will be slower and most likely bigger in size, hence the percentage speedup will be a lot smaller.
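
If you want to reproduce something like this without a browser, ApacheBench (ab, shipped with Apache) can toggle keep-alive with -k; compare the "Time per request" lines of the two runs (example.com being your own test page, obviously) :

# 100 requests, 6 concurrent, no keep-alive
ab -n 100 -c 6 http://example.com/

# the same run with keep-alive enabled
ab -n 100 -c 6 -k http://example.com/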

So what are the disadvantages ?
The disadvantage of using Keep-Alive is quite simple: it uses resources that might not be required. When a connection is held open, Apache keeps resources ready to "answer" even when there will never be another request. This is mostly a memory problem, and memory has always been a bit of an issue for Apache, hence the popularity of Nginx and the like. Most guides will advise against using Keep-Alive when memory is tight; I'm not so sure that's the best advice to give though. KeepAlive in Apache is restricted by a few settings (read below) that can be tuned to your usage profile, so even with little memory it is possible to use this speedup.

Settings to tune
The main config lives in /etc/httpd/conf/httpd.conf on CentOS (Fedora, Red Hat, …), although it's advised to put your changes in a file under the /etc/httpd/conf.d/ directory so that newer versions of httpd (aka Apache) don't overwrite your config. For Debian and friends (Ubuntu, Mint, …) the file can be found at /etc/apache2/apache2.conf. On a second note, make a backup of the config before editing. (This goes without saying, but these kinds of tweaks are easily forgotten.)

  • KeepAlive : The main "flag"; in case you can spare the memory, I would advise to put it on. On Debian this flag is on by default, on CentOS it's off by default. (Might change in the future.)
  • MaxKeepAliveRequests : How many requests a single persistent connection is allowed to serve before Apache closes it and the client has to open a new one. This setting can be used to limit how long one connection keeps being re-used, but I would advise against setting it low unless you are forced to. In my opinion it's better to let a connection serve plenty of requests than to cut Keep-Alive short.
  • KeepAliveTimeout : By default a value of 15 seconds is used, which in my opinion is way, way too much. It means that after the last resource has been downloaded, the connection remains open for another 15 seconds (minus the time it was already open) and hence its resources remain in use. While there is an advantage to a long KeepAliveTimeout (next page), I don't think most situations require it. A personal rule of thumb: how long does an average page take to load? For example, when the page loads in 5 seconds, a timeout of less than 5 is more than enough. While a longer timeout would speed up the next page for your visitor, it's uncertain that a user will a) continue on your website and b) read the page and click through within 15 seconds. The chance that a user wants to download your CSS right now is a lot higher. Hence I think in most situations a good balance is a timeout no larger than 2-3 seconds. If the page takes longer than 2-3 seconds, don't waste your time with this tuning, fix your application first.

When not to use Keep-Alive

  • You have too little memory and you are looking to lower memory usage.
  • You have a static HTML website with little to no external resources, such as JavaScript, images, CSS, …
  • You think people should be patient and wait 280 ms longer.

When to use Keep-Alive

  • You have enough memory according to free -m.
  • You have times with low usage, and then you wanna serve users ASAP.
  • The website was built after 2000, hence it has JavaScript, CSS, images, fonts, XML, …
  • Every 200 ms less waiting is more productivity.

How to edit
This is only an example, adjust as you test. On CentOS, put it in a file under /etc/httpd/conf.d/ (or in the main /etc/httpd/conf/httpd.conf).

# put it on
KeepAlive On

# each persistent connection may serve up to 120 requests
MaxKeepAliveRequests 120

# keep connections open for 2 seconds
KeepAliveTimeout 2
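
After editing, check the syntax and reload; on CentOS 7 with systemd that should be something like :

# check the config for syntax errors
apachectl configtest

# graceful reload, existing connections are kept
systemctl reload httpd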

Example configurations

You have a VPS with limited memory (512 MB, 1 GB) and visitors don't come in one clear peak.

KeepAlive On

# up to 300 requests per persistent connection
MaxKeepAliveRequests 300

# keep connections open for 1 second
KeepAliveTimeout 1

If you have no problems : increase KeepAliveTimeout; If you have problems : decrease MaxKeepAliveRequests.

You have a dedicated server with a lot of memory (you have "free" memory under free -m) and visitors come in peaks.

KeepAlive On

# up to 300 requests per persistent connection
MaxKeepAliveRequests 300

# keep connections open for 5 seconds 
KeepAliveTimeout 5

If you have no problems : increase MaxKeepAliveRequests. If you have problems : decrease KeepAliveTimeout to 2-3 seconds; if it remains a problem, decrease MaxKeepAliveRequests.

Conclusion

Keep-Alive can speed up your website by quite a bit, but it's not a magical switch; there are consequences to using it, although I think these can for a large part be circumvented with suitable settings. It is interesting to see that in HTTP/2 persistent connections and multiplexing (the successor to pipelining) are part of the protocol itself.

I was hit by this error way too early today … (aka before the coffee)

root@server:~# lxc-attach -n 100
lxc-attach: attach.c: lxc_attach: 710 failed to get the init pid

This is the Linux container (LXC) way of saying: the container is not running … The fix is complex, but I'll share it anyway.

lxc-start -n 100

Then just go ahead and

lxc-attach -n 100
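
If you want to double-check the container state before attaching (or see which ones are down), the standard LXC tools can tell you; the exact flags differ a bit per LXC version :

# state and pid of container 100
lxc-info -n 100

# list all containers with their state
lxc-ls --fancy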

Now to wait for the coffee rush.

HTTP/2 for everyone

1 February, 2016

After reading the slides of ma.ttias.be on HTTP/2 I feel a lot more interested in the topic. While the presentation is aimed at PHP developers, I think most Linux users/sysadmins can pick something up from it. I'm a bit ashamed I did not already know about this, but hey, we learn every day. HTTP/2 seems to be an enhancement of HTTP/1.1 and certainly an upgrade over HTTP/1.0. While I am clearly no expert on the topic, it goes roughly like this :

  • HTTP/1.0 : a TCP handshake and headers are sent for every request (a request being index, style.css, jquery.js, …). Keep-alive was added to the protocol later, but it is not on by default.
  • HTTP/1.1 : keep-alive is on by default unless defined otherwise, so multiple requests can happen over one connection. Browsers are still restricted to 6-8 connections per domain. (Meaning you can download 6-8 items at a time once the page is parsed.)
  • HTTP/2 : keep-alive is on by default and the server can now send multiple files concurrently over a single connection (multiplexing); headers are compressed, saving space. Server pushes can happen (this is freaking awesome if it works: the server can send a CSS file before the page is even parsed by the client).
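
If you want to play with it yourself: Apache 2.4.17 or newer with mod_http2 can enable it with something like the snippet below (a sketch; the module path depends on your distro build, and you need TLS set up since browsers only do HTTP/2 over HTTPS) :

# load the module (path differs per distro build)
LoadModule http2_module modules/mod_http2.so

# prefer HTTP/2, fall back to HTTP/1.1
Protocols h2 http/1.1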

Be sure to check out ma.ttias.be's slides, they are a really good explanation :

Slides: HTTP/2 for PHP developers

All (?) browsers supporting HTTP/2 also require HTTPS, but we know where to find that : Let's Encrypt. Although short, the HTTP/2 page on Wikipedia might be a good read, here.

short review : Cloud9

31 January, 2016

In my free time I play some games, so I don't have Linux on my personal computer(s); for work I just SSH into any machine and can do whatever there. However, I wanted to write some code as a hobby project, and unlike other times I didn't wanna use the SVN "sync" method. I work on 3 laptops and 1 desktop, so it's useful to have one central place for my code base. On top of that, I like to store every mini edit. I can't commit like normal people; normal people finish one item and then commit, I commit every change and try it out, even if a feature is not completed.

This makes complex commits very annoying. In SVN I just commit every change and my test server pulls all changes every second, so my development machine and "write" machine are separate, which is great when I switch machines.

X2Go

Anyway, I set up a private repo on GitHub and tried X2Go on my desktop at work. X2Go is an alternative to X over SSH which works super, super fast (compared to X over SSH). Sadly there is no good code editor installed, and I tried to install Atom, but that just would not start over X or X2Go … (no idea why). It's also annoying that full screen in X2Go doesn't show the mouse (bug?) … so this idea was rather quickly abandoned.

Cloud9 // c9.io

So I googled how other people attack this sort of problem … and Cloud9 came into view. They offer a free account, even a private "project", and an SSH-like web console (I like). I went ahead and created an account; I could make one using GitHub so I picked that option, and I checked/accepted/next'ed until I hit a login box. It did know my username, but sadly it did not let me log in … >.<  After trying to reset the password and the activation mail, I just made an attempt to create a new account. That did work, so to the other svennd on Cloud9: I'm sorry for resetting your password.

Getting the code to cloud9

The main advantage is that once the code is there, no matter where I code, I have the same base & test environment. That being said, getting it there was a bit confusing, but very easy because of the console. I have a private git repo, so I need to give my password, and the initial attempt was silently failing. Using the console I just ran git pull and the codebase was there directly. Opening files is easy and you even see the file tree.

Next up : running the code

Now, onto writing and testing the code in one place. I started a MySQL entity and created my database and tables; this went as one would expect on the console. Although I did experience a bit of "lag", considering this was 15 seconds after registration I found it very fast overall (much faster than installing a container/VPS). Next was firing up Apache with mod_php, which was about 1 click away. Again: awesome. I got a link pointing me to a https://c9 website. Great, let's try the app. My app worked until I tried to add stuff to the database; for some reason that did not work. Highly likely my code is to blame, but debugging was kinda hard as I did not find the Apache logs … This is where I got bored and jumped ship. One thing I highly appreciate in an application is the possibility to remove one's code and remove the account entirely. Both were done rather fast. Crazy enough, this convinced me to give Cloud9 a better try with more patience.

Some wins for cloud9

  • Speedy : from codebase to testing in minutes. (PHP/MySQL/Apache)
  • As far as I tested : some customization options in the editor (I like light colors, not black)
  • GitHub integration

Some misses for Cloud9

  • Missing some tutorial/intro.
  • Debugging might not be so easy
  • Online only (?)

This review is based on 30 minutes of playing with Cloud9. This is rather premature!

Cloud9, definitely worth looking into.

I always wondered why there are so many dynamic websites out there with static content, now that JavaScript has moved out of the "click/counter/funny mouse effect" field and into a real scripting language (in case you missed it, get out from under that rock). Let me explain what I mean using this image :

custom vs dynamic

Static pages are (generally) HTML files a server can just grab and give to the client. That doesn't mean they can't change, but generally it means the server doesn't have to "think" about them when the client requests them. That is the reason static is super blazing fast, and it's great when your website is small and static of content. If it's larger and you wanna change something, however, static isn't that great, since you need to go back and change every file you made up to that point.

The larger part of the web (that's a guess, I don't have numbers) however uses dynamic pages: code/scripts written in a scripting/programming language such as PHP (WordPress for example), ASP, Java, Perl, Python, Ruby, … There the process goes differently: when a user requests a page, the server has to process the page, execute it, most of the time request something from the database, and then "complete" the page and give it to the client. This is, as you expect, a bit more work. Although since computers these days are fast, most of the time nobody notices.

Until you are running on a not-so-new/powerful server, have a lot of users, have bad code, or a combination. That is exactly what hosting companies do: they put a lot of clients in one box, because the CPUs are only used when someone visits. So if 10 websites get 1 visitor each, it's lightning fast; if those 10 each get a 100 visitors, suddenly it's not that fast anymore. This is commonly known as shared hosting, and for a lot of projects that is okay. A step up from that is a VPS, where you get a piece of hardware dedicated to you, but you still share, and hence the larger you grow the slower it gets. Then there are dedicated servers, but again, at a certain point you would outgrow them too, since a computer can only execute so many requests at a time.

Now smart engineers came up with something called caching, which generally means: if we have to calculate 1+2 every second for 100 users, why not just keep 3 and return that? So caching keeps the result of a request in memory; when someone else asks, it can simply return the answer. The thing is, the first visitor still has to wait for everything; only when someone else asks the same thing will it be fast, and if you have a lot of content, most of the time you would still have to wait. Anyway, caching can happen on a lot of levels and it has proven its value. That however got me thinking: why do we even need dynamic pages if we are going to cache everything anyway?

For blogs, for example, at the time of creation all data and all information is known by the database and by the logic … so why would a visitor have to wait for that to be redone every time, when generally one could just do all the work before the visitor gets there and return the static result? Hence the static generators. Those utilize the power of a dynamic website, but instead of doing the same thing for every visitor, they just "precompile" the website and the user only has to download static files; the server is relieved 🙂

Now I looked at a few of these static generators, and generally I haven't found what I was looking for. Most of those I found are black and white: they separate the dynamic from the static in such a way that there is no dynamic anymore at all. Why? I have no answer. Combine the power of the two … WordPress for example has a super powerful dashboard and a WYSIWYG editor (including a media manager, …), compared to the markup commonly used for readme files that a few of those static generators use. I don't get it. Why not put a database behind a static blog for search queries, instead of using third-party search engines, if at all? Why not generate the pages of large blogs only when they are requested, so a design change doesn't mean remaking all the static pages immediately (offload them)? I would think that a hybrid dynamic system can combine the power of both, and in most cases is way better than static or dynamic alone.

In short, I'm thinking of building my own blog system: a core that is dynamic and can accept comments, which get validated by Akismet and then rendered into the HTML page. If no HTML page is available, redirect the request to a rendering page (or, if no content is found, 404). All the technology is there, I don't see the big loophole in my plan.

Options :

  • HTML page rendered during creation
  • HTML page re-rendered when a comment is added
  • non-existing HTML page : render at client request if not yet generated by cron
  • non-existing page : 404

In fact I think the backend should be fully dynamic, but the frontend should be static HTML files, rendered by the backend during creation.
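
Purely as a sketch of that idea (nothing is built, and cache/ and render.php are made-up names), the "serve the HTML if it exists, otherwise render it" part could be a couple of Apache rewrite rules :

RewriteEngine On

# serve the pre-rendered copy if it exists
RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}.html -f
RewriteRule ^ /cache%{REQUEST_URI}.html [L]

# otherwise hand the request to the dynamic renderer
RewriteCond %{REQUEST_URI} !^/cache/
RewriteCond %{REQUEST_URI} !^/render\.php
RewriteRule ^ /render.php?page=%{REQUEST_URI} [L,QSA]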

The hybrid blog, with all the fun of dynamic pages. #mydreamcontinues

Getting NFS to work seems to be a bit of a grey area for LXC … I only recently switched part of our infrastructure over to LXC, but no NFS would definitely be a no-go.

We only work in a virtualized environment because it's easy for backups and to efficiently use the computational resources on each of our servers. That's the reason security (what a container is allowed to do) comes second to functionality. On top of that, most of these containers don't provide a service to the outside world; the only reason they have a connection to the web is LAN and updates. So before you use this "guide", know that I did not look into the security implications.

Installing NFS

I started by updating & installing the NFS utilities.

# updates
yum update -y

# install nfs
yum install nfs-utils nfs-utils-lib

Next I started the services & made sure they come back after a reboot.

# mark them as start-during-boot
chkconfig rpcbind on
chkconfig nfs on 

# start the services
service rpcbind start
service nfs start

I received this error :

Starting NFS daemon: rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem

Which I believe means that some of the NFS kernel modules were not loaded when the container was started. This was solved by installing nfs-kernel-server on the Proxmox head node.

apt-get install nfs-kernel-server

I also needed to add an exception to AppArmor. I don't know exactly how AppArmor works, but it can be overruled in the LXC configuration (/etc/pve/lxc/101.conf) by adding :

lxc.aa_profile: unconfined

After that I restarted the container in the Proxmox web GUI (cause I don't know the console commands 🙂 ). I retried and the services started, although NFS was not reporting itself as working :

service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 1061) is running...
nfsd dead but subsys locked

However a mount from an external machine worked.

/etc/exports on the LXC container :

/data *(rw,sync,no_root_squash,no_subtree_check)

and a soft mount from the client (not an LXC container in this test) :

mount -o soft,rw lxc_ip:/data /mnt/tmp
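
To sanity-check from the client that the export is actually visible, showmount (part of nfs-utils) can be used, lxc_ip again being the container's IP :

# list the exports the container advertises
showmount -e lxc_ip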

So I'm not 100% sure it's safe, but it's working 🙂

Bye OpenVZ, Hey LXC

27 January, 2016

Linux Containers (LXC) the new OpenVZ

I recently switched 2 of our 4 Proxmox nodes to the latest build (v4). On those, OpenVZ has made way for LXC. The exact reason why, I'm not sure 'bout; I believe it was something with kernel modules and licensing. Anyway, it changed, and all the commands changed with it. Not really good for efficiency now that we have both systems in place; it will take some time to swap everything over. Anyways, a little cheat sheet definitely helps :
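
These are the equivalents I reach for most (from memory, so double-check the man pages) :

# start a container (was : vzctl start 100)
lxc-start -n 100

# stop a container (was : vzctl stop 100)
lxc-stop -n 100

# enter a container (was : vzctl enter 100)
lxc-attach -n 100

# list containers (was : vzlist -a)
lxc-ls --fancy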


This error occurred on a server with yum-cron installed, so I ruled out updates, but I was wrong: an update solved the issue, about 100 MB of updates to be exact … A new lesson learned: don't blindly trust yum-cron!

As for this error, I believe it was solved by one of these package updates :

Jan 26 09:23:29 Updated: 1:net-snmp-libs-5.7.2-24.el7.x86_64
Jan 26 09:23:49 Updated: 1:net-snmp-agent-libs-5.7.2-24.el7.x86_64
Jan 26 09:24:35 Updated: 1:net-snmp-5.7.2-24.el7.x86_64
Jan 26 09:24:46 Updated: 1:net-snmp-utils-5.7.2-24.el7.x86_64
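
(Those lines look like they come straight from /var/log/yum.log; to see what yum-cron pulled in on your own box, something like this should do.)

# recent package updates according to the yum log
grep Updated /var/log/yum.log | tail -n 20

# or look at the transaction history
yum history list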