I got a server with an LSI MegaRAID 9271-8i RAID controller in it, and god knows the console application is bad. There is no way one can master it without at least two PhDs in applied cryptography or similar. You can download the console application from the Avago website, where you need to search for MegaCli 5.5 P2; this zip contains MegaCli-8.07.14-1.noarch.rpm (at the time of writing), which in turn contains the application to talk to the controller (/opt/MegaRAID/MegaCli/MegaCli64).
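
A minimal sketch of getting the binary onto a CentOS box, assuming you downloaded the zip mentioned above (the zip file name here is an assumption, the rpm name is the one from the download):

# unpack the download (zip name is an assumption, adjust to what you actually got)
unzip 8.07.14_MegaCLI.zip

# install the rpm that is inside
rpm -ivh MegaCli-8.07.14-1.noarch.rpm

# the binary then lives here
ls /opt/MegaRAID/MegaCli/MegaCli64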

So now that you have the console app, feel free to use these commands at your own risk. (They tend to work for me.)

Creating a raid0 on every disk separately (= a JBOD implementation, which one should NOT use btw, since replacing disks is a [enter bad curse word]). E stands for the Enclosure Device ID and S for the Slot number. This is followed by the number of the RAID card (-a0), in case you have more of these … monsters. (Note: one can also use ALL)

MegaCli64 -CfgLdAdd -r0 [E:S] -a0 -NoLog

How do you get info on the disks, you ask?

MegaCli64 -pdlist -a0
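
The pdlist output is rather verbose; assuming the usual field names in its output, a grep like this gives just what you need to fill in [E:S]:

MegaCli64 -pdlist -a0 | grep -E "Enclosure Device ID|Slot Number|Firmware state"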

Save your sanity: shut down an active alarm.

MegaCli64 -AdpSetProp AlarmSilence -aALL

Replace a disk (because a single command would be too easy):

# let's find where the disk is (blink)
MegaCli64 -PdLocate -start -physdrv[E:S]  -a0

# stop it again
MegaCli64 -PdLocate -stop -physdrv[E:S] -a0

# now mark it as offline
MegaCli -PDOffline -PhysDrv [E:S] -a0

# now mark it as missing
MegaCli -PDMarkMissing -PhysDrv [E:S] -a0

# now make it ready for ejecting the disk (I'm not kidding)
MegaCli -PdPrpRmv -PhysDrv [E:S] -a0

# now replace the disk
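
After swapping the physical disk, in this one-disk-per-RAID0 setup I would expect to re-create the logical drive on the new disk rather than rebuild it; a sketch using only the commands from above, where [E:S] is whatever pdlist reports for the new disk:

# check the new disk shows up and note its enclosure/slot
MegaCli64 -pdlist -a0 | grep -E "Enclosure Device ID|Slot Number|Firmware state"

# re-create the single-disk raid0 on it
MegaCli64 -CfgLdAdd -r0 [E:S] -a0 -NoLog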

Might add some other gems later, but for now this is my personal cheat sheet.


I already got Let's Encrypt working with CentOS 6.7 and Apache; recently I tried out Nginx. I wasn't blown away by the speed, but I do like the way the config is structured. The fact that it should be faster is icing on the cake (a simple benchmark showed it to be at least a little bit better). So I switched over -I'm no expert on Nginx- and below I posted my current config for SSL on Nginx (version 1.8.0).

This is part of the server block:

listen 443 ssl;

server_name  svennd.be;
ssl_certificate /etc/letsencrypt/live/svennd.be/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/svennd.be/privkey.pem;

# ssl session caching
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;

# openssl dhparam -out dhparam.pem 2048
ssl_dhparam /etc/nginx/cert/dhparam.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=15768000;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=86400;
resolver_timeout 10;
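
After changing the config, I test and reload rather than blindly restarting; a small reminder, assuming the stock CentOS init script name:

# check the config for syntax errors, then reload without dropping connections
nginx -t && service nginx reload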

Note that I used some “tricks” to make the SSL penalty a little bit smaller. I'm also planning on posting my full Nginx config, but that still needs some more work. I'd also like to explain a bit why I chose these values.

Don’t forget : Encrypt the web!

Let’s Encrypt the web, this is where my https story began!

This is one of those commands every sysadmin should read up on in the manpage. I use history on a daily basis, and recently I bumped into an article with a nice find, although I had never stopped to think this was supposed to exist: removing/filtering stuff from history. If you don't know the history command, this is (part of) how it looks:

# history
  988  php -v
  989  which php
  990  ll /usr/bin/php
  991  ls -l /usr/bin/php
  992  cat /var/log/messages
  993  iptables -L
  994  exit
  995  iptables -L -n
  996  iptables-save

While I generally prefer to have close to infinite history in Linux, when you have just entered:

mysql -u root -pMYFANCYDIFFICULTPASSWORD main_database

you start to think, gee, I hope nobody checks my history here… or

rm -rf *

you hope nobody just copies this in root… -I know it's a bad habit, don't judge-. On the other hand, I type ls, ll, pwd and cd about 15 times a minute when logged in to a console (I'm that sort of crazy). That is not really useful information to store.

So I was very happy to learn these new tricks for the history command. The first thing to do is increase the total amount of commands saved, because it has happened that the commands I am searching for are just out of reach. Since this is low I/O and storage these days is cheap, I don't see a point in not increasing it hugely. I changed HISTSIZE and HISTFILESIZE to 1000 and 10000.

changed in ~/.bashrc

HISTFILESIZE=10000
HISTSIZE=1000

Why two variables? In short, HISTSIZE is the in-memory history per session, while HISTFILESIZE is the “long term” storage on disk. (source)

Another handy bash variable is HISTCONTROL. In the current version of GNU Bash (history is part of GNU Bash), 3 options are available:

  • ignorespace : when you enter a line starting with a space, it is not saved in history
  • ignoredups : when a command is repeated consecutively, only the first instance is saved
  • erasedups : when a command is entered, all previous occurrences of it are removed from the history. For example, when running regular backups of MySQL with mysqldump, only the latest mysqldump line is kept. (this makes history more of a “library”)

ignoreboth is a shorthand for ignorespace plus ignoredups.

I also added this to ~/.bashrc

HISTCONTROL=ignorespace:ignoredups:erasedups
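
A quick illustration of ignorespace, with a placeholder password; a command typed with a leading space never makes it into the history:

# normal command, gets saved
echo "hello"

#  note the leading space : with ignorespace this line is NOT saved
 mysql -u root -pPLACEHOLDER main_database

# show the last few entries, the mysql line should be missing
history 3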

When using history, an indication of when a command was issued can be useful, in case you want to look at what your drunken colleague did at three in the morning as root. This can be achieved with HISTTIMEFORMAT, again a bash built-in variable; it is run through strftime(), so you can set up how the date is displayed. I don't care for seconds or years (too specific, too unspecific) so I left those out. Note that you best end with a space, as the format gets concatenated with the command.

HISTTIMEFORMAT="%H:%M %m/%d "
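
With that format, the history output would look something like this (illustrative entries, not from a real session):

# history 3
  994  03:12 01/07 iptables -L -n
  995  03:13 01/07 iptables-save
  996  09:41 01/07 history 3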

The last GNU Bash variable is HISTIGNORE, which filters out commands. I removed some that I know by heart, although ignoredups and erasedups are probably more than enough to catch most redundant commands. It is colon-separated.

HISTIGNORE="ls":"ll":"pwd":"free *"

This would filter out ls, ll (an alias of ls -l) and pwd. The command free with any flags would also be fully ignored (free -m being among the most used).

The last setting to play with is shopt, the shell option “manager”. You want to change histappend, which appends the history to the history file instead of overwriting it; I don't exactly know why this is not on by default. It has occurred that multiple commands were gone from history while I was certain I had entered them. That might be an issue with multiple sessions overwriting one another, and histappend might “solve” it. You can check this option with:

shopt | grep histappend

It is either on or off; enabling or disabling can be done from the console with:

# enable
shopt -s histappend

# disable
shopt -u histappend

Although I just appended it to ~/.bashrc.

That all being said, I found it a very useful find. However, if you hit the cases I described at the start of this article, removing a single line can be accomplished with:

history -d LINE_NUMBER
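
For example, to get rid of that mysql line with the password in it, I'd look up its number first (the 989 here is of course just whatever number grep shows on your system):

# find the offending entry
history | grep "mysql -u root"
  989  mysql -u root -pMYFANCYDIFFICULTPASSWORD main_database

# and delete that single entry
history -d 989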

or, if you want to hide your tracks:

history -c

will remove all history. (not recommended)

Note: I tested this with GNU bash, version 4.1.2(1); your mileage may vary!

All together :

HISTFILESIZE=10000
HISTSIZE=1000
HISTCONTROL=ignorespace:ignoredups:erasedups
shopt -s histappend

Update: I removed HISTIGNORE, as it also ignores the “ls *” lines, so when using the arrow-up key there is no way to recall the directory you just typed. I commonly use:

ls /data/
ls /data/somedir
ls /data/somedir/with
ls /data/somedir/with/something/

Typing log: “ls /dat”, tab, arrow up, “som”, tab, arrow up, “wit”, tab, arrow up, “som”, tab.


Creating your own search database, not with blackjack and most definitely not with hookers, but with mlocate. Locate is one of the possible search options in Linux/GNU. It is part of the mlocate package, which contains the locate binary. Locate is a database-backed search; as such it is faster than find if you search over a large amount of indexed directories. The disadvantage is that you have to build the database first in order to search it.

I believe this is done on some distros every night (cron); if the machine is offline during that time, the database does not get updated. The mlocate package has a tool, updatedb, that can be run to update the ‘central’ database. It is however also possible to create a database for, say, an external drive; this can be handy if you want to check whether a file is there while the drive itself is not mounted (or not even in close proximity to a computer). For me, I wanted to make tapes searchable even when they were off the machine, in a top-secret-black-site location. (aka a box behind the server)

This can be done easily with updatedb & locate.

# create the database
# the -l 0 sets the updatedb to ignore permissions 
updatedb -l 0 -U /media/external_drive -o external_drive.db

# search a file
locate -d /locate/external_drive.db IsMyImageHere

# limit the search to 10 items
locate -d /locate/external_drive.db -n 10 IsMyImageHere

# show stats on this database
locate -d /tape/external_drive.db -S
Database /tape/external_drive.db:
 92,709 directories
 7,440,981 files
 762,003,203 bytes in file names
 134,485,291 bytes used to store database

# speed showoff for database above :
time locate -d /tape/external_drive.db not_a_file

real 0m6.843s
user 0m6.782s
sys 0m0.061s

~7 seconds for a search over 92,709 directories and 7,440,981 files … pretty nice 🙂
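
To keep such a database fresh I would probably just cron the updatedb line; a sketch of a hypothetical crontab entry, assuming the drive is mounted at the path used above and the database lives in /locate:

# /etc/crontab style entry : rebuild the external drive database every night at 03:00
0 3 * * * root updatedb -l 0 -U /media/external_drive -o /locate/external_drive.db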

Source: Linode, DigitalOcean and VULTR Comparison

I'm currently having a lot of problems with DigitalOcean; I don't know if my setup/configuration is bad or DO is just crapping on my VPS…

time dd if=/dev/zero of=test bs=16k count=10000
10000+0 records in
10000+0 records out
163840000 bytes (164 MB) copied, 12.0611 s, 13.6 MB/s

real 0m12.093s
user 0m0.004s
sys 0m0.374s

This is not really giving a lot of hope …

// update

I contacted support and they offered -after confirmation of these results- to move me to another hypervisor in the same region. Weirdly enough, they asked me to shut down the machine myself and then respond to the ticket. While I know I could have floating IP/failover/…, they could have just shut down, moved and restarted; minimizing the downtime was not an option. That said, they were quick to help me out, see the timeline:

  • me> Created ticket @ 10:24
  • digitalocean> First response @ 10:58
  • My repeat test @ 11:25
  • digitalocean> second response @ 11:54
  • confirmation of move @ 12:08
  • digitalocean> move confirmed. @ 12:43

I had the database backed up and took a snapshot before confirming the move. Because I had to shut down myself, the downtime was ~2 hours. The sad part is, the move itself only took 4 minutes and 2 seconds, plus 8 seconds to boot up again. So a 5 minute “upgrade” took the site out for 2 hours.

Rerunning this dd, the lowest value was 177 MB/s and the highest 646 MB/s.

time dd if=/dev/zero of=test bs=16k count=10000
10000+0 records in
10000+0 records out
163840000 bytes (164 MB) copied, 0.253633 s, 646 MB/s

real    0m0.256s
user    0m0.007s
sys     0m0.247s

That is more like RAID5 SSD speed.
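
As a side note, writing zeros through the page cache flatters the numbers; if I redo this I would add conv=fdatasync so dd waits for the data to actually hit the disk before reporting:

# same write test, but force a sync before dd reports the throughput
time dd if=/dev/zero of=test bs=16k count=10000 conv=fdatasync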

I recently switched from Apache to Nginx and also to a new server. Yay! I copied /etc/letsencrypt/ over from the first server to the second, and everything seemed to be fine. Sadly, nope! For some reason it doesn't accept certificates made on another server. Fixing it is easy once you find it:

# save / recreate your config file first (the next step wipes /etc/letsencrypt)
nano /etc/letsencrypt/cli.ini

# remove all info on letsencrypt
rm -rf /etc/letsencrypt/

# remake cert's
/opt/letsencrypt/letsencrypt-auto --config /etc/letsencrypt/cli.ini --debug certonly

# restart server might be needed
service nginx restart
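
For reference, a minimal sketch of what a cli.ini could look like for webroot authentication; the paths, e-mail and domains below are placeholders, not my actual config:

# /etc/letsencrypt/cli.ini (hypothetical example)
rsa-key-size = 4096
email = admin@example.com
domains = example.com, www.example.com
authenticator = webroot
webroot-path = /var/www/html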

Happy encrypting 🙂

With CentOS 7.1, MySQL got replaced by MariaDB. That is great, but now I also want MariaDB on CentOS 6.7, because. By default it is not in the repos (no surprise there), so you need to add the repo yourself. Good thing MariaDB has a download page telling you how to.

create /etc/yum.repos.d/MariaDB.repo

# MariaDB 10.1 CentOS repository list - created 2016-01-07 08:22 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos6-x86
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

After you add that repo, the page will tell you that detailed info can be found elsewhere. If your Google query landed you on that page first and, like me, you did not read the page but just copy/pasted the repo, it won't work 😀 yum clean all will save you.

After that you can just run :

yum install MariaDB-server MariaDB-client

Also

yum install MySQL-server MySQL-client

will resolve to MariaDB.
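
After the install I would start the service and run the usual hardening script; a short sketch, assuming the MariaDB RPMs install their init script under the name mysql on CentOS 6:

# start the server and enable it on boot
service mysql start
chkconfig mysql on

# set a root password, drop the test database, etc.
mysql_secure_installation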

Have fun with Maria … db.

The interwebs is a bit unclear on this topic, but on a “clean” CentOS 7.1 (tested on DigitalOcean) the PHP version is v5.4, which is a bit of a bugger, since from v5.5 Zend Optimizer+ is included in the core of PHP, making PHP a lot faster: about 70% faster, if we can believe the benchmarks. I need APC not for the user cache but for the in-memory storage of compiled PHP bytecode, so either Zend Optimizer+ or APC would do fine. Since Zend Optimizer+ is included in the PHP core of newer versions, CentOS will someday push this (in the not so near future, I guess). So I would recommend using Zend Optimizer+, but feel free to ignore that advice, I am no expert. I will test both and decide then, though I doubt I will find much difference.

APC

If you need the user cache : yum install php-pecl-apcu

# Install APC
pecl install apc

# put this in /etc/php.d/apc.ini (nano /etc/php.d/apc.ini)
extension=apc.so
apc.enabled=1

restart httpd / php-fpm

You can validate that it is running and see stats by dropping in the apc.php file, which can be found here. (source)

Zend Optimizer+

yum install php-pecl-zendopcache

restart httpd / php-fpm

The config file is located at /etc/php.d/opcache.ini.
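
The defaults in that file are workable, but these are the knobs I would look at first (the values below are just illustrative, not tuned recommendations):

; /etc/php.d/opcache.ini (illustrative values)
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60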

There is no “official” statistics script for Zend OPcache that I could find, but a user (Rasmus Lerdorf) has made something similar.

wget https://raw.githubusercontent.com/rlerdorf/opcache-status/master/opcache.php into some public directory to see the nice stats.
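
A quick sanity check from the command line, without the stats page, to confirm the extension is actually loaded (this checks the CLI; the web SAPI should match after a restart):

# is the opcache extension loaded ?
php -m | grep -i opcache

# dump the opcache related settings
php -i | grep -i "opcache.enable"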

FUBAR PHP v5.4

You can also ditch PHP v5.4 from the CentOS repos and add another repo, such as the Webtatic repo or the Remi repo.

This is a project I have been working on for some time. I manage a phpBB 3.1 forum that is pulling some traffic -that is great- the problem: the server is not keeping up. I have found the server (VPS) dead a few times in the morning, showing OOM errors and killing off MySQL or httpd, resulting in the forum being totally unreachable. Bad for the site, bad for Google results, bad for my rep, even worse for my honor as ad interim sysadmin.

Solving this problem can be very easy: throw in a lot of cash and upgrade to a larger VPS package or a dedicated server. Before I take that expensive path, I want to be sure I have squeezed everything out of the current hardware and configuration. I “know” four webservers: Apache, Nginx, Lighttpd and IIS. The last one is the Windows one, and I want to stay far away from it (blind discrimination). Lighttpd I know from memory, but I haven't looked it up, and I -probably wrongly- think it focuses on dropping less-used features and being a stable, simple webserver; hence the name, and simple and stable is not my ballpark for this. Apache I have set up a few times for small websites; while I would not call myself experienced, I am certain that I could fix most common mistakes. On Nginx I have read the stories that it can be set up to be multiple factors faster than Apache, due to the way it works; however in most recommendations I found people advising Apache, because it is simpler to set up and maintain. For the purpose of squeezing everything from the setup I will go with both Apache, since I have a bit of experience with it, and Nginx, as its big selling point is the increase in speed at a lower usage of resources.

Current stack

Currently I am running vanilla Apache. The first thing to check when something is going slow is what is making it go slow, the so-called bottleneck. The stack is CentOS 6.7, Apache 2.2, PHP 5.6, MySQL 5.1 and phpBB 3.1. Each of those could be updated to the latest version, though updates rarely speed up an application hugely unless a known bug is involved. CentOS does not ship bleeding edge software, and that makes it a good bet for stability; they also patch the packages so that the big bugs get fixed regardless of the version. So while updating could make some difference, I doubt that is where I am going to find the golden egg. phpBB can't be replaced in this case. (at least, that is not the preferred route)

Testing setup

There are some good tools online to test your application under load; sadly all the nice ones aren't free, and I don't want to go and spend a few hundred bucks on testing, that is simply not an option. Ideally a load tester for phpBB should register users, post topics, read topics, edit topics, … most users don't reload the same page 100 times … still, that is the only easy test I found. The tool Apache Benchmark (ab) “simply” makes concurrent connections to the webserver. There are some alternatives, such as httperf, siege, jMeter, Tsung, gor, gatling, … all these tools are probably better than ab, but they sadly take a lot of configuration and work to get started with. I am going to try at least a few of them, but for now ab has proven able to shoot its arrow straight at my server's Achilles heel. Since I want to test multiple setups, I created a small bash script that will do a series of ab tests.

#!/bin/bash
SERVER="http://server/"

# make sure the output directory exists
mkdir -p result

# ab run function will pull ab test
function ab_run { 
  # report
  echo "run $1 / $2" 
  # ab -n requests -c concurrent_users url
  ab -n $1 -c $2 $3 > result/ab_$2_$1 2>/dev/null;
  
  # let it stabilize
  sleep 15
}

# usage : ab_run <number of requests> <concurrent users> <url>

ab_run 50 5 $SERVER
ab_run 100 10 $SERVER
ab_run 200 20 $SERVER
ab_run 300 30 $SERVER
ab_run 400 40 $SERVER
ab_run 500 50 $SERVER
echo "ab run completed"

The script writes its output to the result/ directory; this output I will reuse later. Since I ran this on a VPS with limited resources (2 CPU/2 GB RAM), I had to scale down to a load that would not crash the server to begin with. I don't want to test long-term stability, so I took a tenfold of the concurrent users as the number of requests. I let the server stabilize for 15 seconds between runs, to see some more spread in the load peaks; 15 seconds is not long enough to drop back to zero load, so a bit of overlap is to be expected. This is a simple test that will put load on the server, so don't run it in production (unless you want customers on the phone). Also note that there is a limit on the concurrent requests a client machine is able to produce, though these values don't come even close to that number. (while 50 users sounds like little, it is a good starting point) This is not the best method, but it is a decent starting point in my opinion.

Setups

I have tested 7 setups :

  • Test 1 was a ‘baseline’ test: I pulled a static file from a freshly installed LAMP stack. (the forum was installed, just not accessed) This is useful to see whether the test itself can generate load and whether everything is working. It is also a good way to check the difference between static and dynamic pages.
  • Test 2 was the default Apache setup pulling the index page of the phpBB forum; I believe this is representative enough of the other public pages of the forum. The default value of keep-alive is off in Apache, so adding the keep-alive flag won't speed things up; in fact keep-alive keeps the connection open for a timeout, but ab won't request any other files (normal users would). Unless I shorten that timeout, I would need to add a large amount of time to the stabilizing period. Like this, it is the worst-case-scenario setup.
  • Test 3 was changing the phpBB file cache to memcache, setting up memcache with 1024 connections and 64 MB of RAM (see the memcached sketch after this list). While some more RAM might be advisable in production, for the ab test only one page will be requested; in fact lowering the memory (since this is only a 1 page test) might give a better, as in more realistic, result.
  • Test 4 was setup 3 with the APC module enabled for PHP, which keeps the compiled PHP bytecode in memory. I found most blogs/fora referring to APC as the best/easiest way to go. It was officially endorsed to be included in PHP 6, but seemingly the opinion has changed in favor of Zend OPcache … While phpBB can also use APC as its cache, I am not sure what is recommended here; APC will work no matter what phpBB decides, while memcache is only used when the application requests it.
  • Test 5 was setup 4 with MaxClients set to 20; the idea behind that was that it is better to queue clients than for the forum to go dark. (20 concurrent users over 24h would be 1.7 million requests, not 1.7 million visitors!)
  • Test 6: enough of Apache, let's hit the Nginx park. A clean setup, taken as a baseline; php-fpm was used, and memcache was kept with the same setup as test 3.
  • Test 7: phpBB has an example in their repo of how to set up Nginx for phpBB. I took it over and checked whether I had missed some low hanging fruit; the most important addition was the explicit configuration of gzip.
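
For test 3, the memcached side of things on CentOS boils down to a few lines; a sketch of the sysconfig file I would use (the phpBB cache driver change itself is left out, as that depends on the phpBB config):

# /etc/sysconfig/memcached (values matching test 3)
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"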

Those setups form a basis for me to go into more specific setups. During the test I ran this simple bash script to log values for load and memory.

while :
do
  free -m >> memory_data
  uptime >> cpu_load
  
  tail -n 1 cpu_load
  sleep 5
done

It prints out the load values so you can stop the test if the values get too high. (they did, sorry co-VPS'ers)

Pulling data from results

I put every result file in a directory load_($test_nr); getting the data into something workable is a bunch of hacks, but I share them for future me.

# get cpu load 1 min
for i in {1..7}
do
   cat load_$i/cpu_load | cut -c45-49 > compile/cpu$i
done

# get memory usage
for i in {1..7}
do
   sed -n '2,${p;n;n}' load_$i/memory_data > compile/memory$i
done

# get failed requests
for i in {1..7}
do
  cat result_$i/ab_*_* | grep Failed | cut -c 25-40 > compile/failed$i
done
 
# total time
for i in {1..7}
do
  cat result_$i/ab_*_* | grep "Time taken for tests:" | cut -c 25-40 > compile/total_time$i
done

# requests per second
for i in {1..7}
do
  cat result_$i/ab_*_* | grep "Requests per second:" | cut -c 25-40 > compile/rps$i
done

# time per request across all
for i in {1..7}
do
  cat result_$i/ab_*_* | grep "Time per request" | grep "across all concurrent req" | cut -c 25-31 > compile/tpr$i
done
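
To get these columns into a spreadsheet I would just glue them together per metric; a small sketch with paste, assuming all seven files have the same number of lines:

# merge the per-test columns into one tab separated file per metric
cd compile
paste cpu1 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 > cpu_all.tsv
paste tpr1 tpr2 tpr3 tpr4 tpr5 tpr6 tpr7 > tpr_all.tsv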

Results & Discussion

Note: I am no expert in this; these tests have huge biases and are in no way close to how normal users would interact with your application/board. These values and results are not statistically sound. Don't change stuff you haven't tested.

Note 2: While the production server has been running CentOS 6.7, I used CentOS 7.1 for the test run.

CPU

One of the ways to see how well or badly a server is doing is checking the CPU load, a value that tells you how much CPU power has been used over a period (1, 5, 15 minutes). I set the load out against the total run time. The total run time is count(values)*5s; since I had no easy way of plotting that, I did not bother, so no exact values there.

First off, these loads are crazy (this is the 1 minute load): since this is a 2 CPU VPS, a maximum load of 2 should be the target (roughly two virtual CPUs maxed out). I left out the first test, as its maximum CPU load was 0.03; that was to be expected, since requesting static files with no processing on the server side should not generate CPU load. The clean version (test 2) ran the longest, while using memcache as the forum cache (test 3) reduced the time it took to finish; the maximum load is approximately the same for both (39.3 vs 39.8). The largest win, time wise, is using APC (test 4): it speeds up PHP execution hugely and the CPUs are slightly less stressed, with a load maximum of 27.4, probably because the CPU doesn't have to compile phpBB's code to bytecode on every request. In real life I don't think the effect will be this huge under these load conditions, because APC can't predict anything, it can only keep the bytecode of the most used files. Since I still had a load roughly 20 times what this VPS can reasonably carry, I needed to pull down the CPU usage. One way I found was to lower MaxClients, which queues users that hit the server over the limit and as such keeps the server stable; by default Apache ships with 256, which is clearly too much for dynamic PHP pages. I took the arbitrary value 20 to see the effect on CPU, and the result was astonishing: the CPU load stayed below 5.6. That is still about twice too high, but it is a good indication of what I can do with this parameter. Lowering it even more would put too many visitors in the queue and result in a very slow experience; too large a value would result in too high CPU and eventually some service giving out and breaking. Perhaps this parameter should be combined with keep-alive. (more testing needed)
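
For reference, lowering MaxClients in test 5 was just a few lines in the Apache 2.2 prefork config; a sketch of roughly what that block looks like, with the surrounding values close to the stock defaults and only the limits capped at 20:

# httpd.conf, prefork MPM (Apache 2.2)
<IfModule prefork.c>
    StartServers        8
    MinSpareServers     5
    MaxSpareServers    20
    ServerLimit        20
    MaxClients         20
    MaxRequestsPerChild 4000
</IfModule>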

The results with Nginx were not significantly different from Apache: the load was 24.64 without gzip and 25.33 with, a bit lower than Apache's 27.4 (memcache was used in both, and APC was active on both). This was somewhat expected: while the technology with which Apache and Nginx handle requests differs greatly, the most powerful feature, server side caching, was omitted on both. I also have close to no experience with Nginx, so purely on these results Nginx is not a clear winner. I am, however, not certain APC was active on php-fpm (v5.4.16); I will test this later. The reason for not using server side caching is simple: in the test I hit a single PHP page that could be cached, but in a real world scenario it can't be cached for long. That being said, a microcache such as is possible in Nginx might be very useful, as multiple hits on the same page could be cached for a short while, so 100 hits on a single page could pretty much be served from a 10 second cache. However, setting something like that up would be rather difficult without good knowledge of Nginx. (I am working on it!) CPU wise, some more research is needed to conclude anything. Most errors were memory related (out of memory errors), so it might be good to see what exactly happens with the memory during the tests.
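
For completeness, the microcache idea I have in mind would look roughly like this in Nginx with php-fpm; this is an untested sketch, and the path, zone name and 10 second lifetime are assumptions on my part:

# in the http block : 10 MB key zone, cached responses stored on disk
fastcgi_cache_path /var/cache/nginx/microcache levels=1:2 keys_zone=microcache:10m max_size=100m inactive=60s;

# in the server/location block that passes requests to php-fpm
fastcgi_cache microcache;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_valid 200 10s;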

Memory

Testing memory is not as easy. While there is a tool, free, that reads and parses memory usage, Linux/GNU is not as straightforward as “you still have memory / you don't have any memory left”. So for these values I used the “available” memory from free (procps-ng 3.3.10):

Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use.
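
Since I only needed that one column, a one-liner like this would do the sampling instead of storing the full free -m output (assuming the newer free output format where “available” is the 7th field on the Mem line):

# log the available memory (in MB) every 5 seconds
while :; do free -m | awk '/^Mem:/ {print $7}' >> memory_available; sleep 5; done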

ab results

The ab results give me a lot of data, but my fear came true: the data is not that useful. I parsed requests per second, total run time and time per request. I will only show a graph for the last one, as they are all pretty similar.

It shows pretty much what I expected from the CPU usage: APC is the only factor that really gives the server a boost, bringing the serving time closer to that of static files.

Conclusion & Thoughts

There is no such thing as a bad experiment or bad results, just bad use of the given data. This test was set up to give a rough idea of where to go, as I have no experience with testing tools or with testing a webserver to begin with.

  1. ab is a simple and great tool to use, but aside from hammering the server, the application underneath is not really tested.
  2. APC, or any other tool that caches compiled PHP bytecode, has a real effect both on load and on speed. I am not surprised there were voices to include it by default; since I tried this on CentOS 7, I believe it is not yet included as of now.
  3. Memcache seems to have little to no effect. I was a bit surprised by that, but the test is the problem here: since phpBB by itself caches to file, the gain from moving from a file cache to a memory cache is limited.
  4. While this test was for phpBB, the effect on phpBB has not really been tested, and further tests are needed to say anything useful, except for the fact that APC is a real booster, at least in these tests.

While I keep on searching, feel free to comment or give advice! All the data shown here can be accessed on Google Docs.


Intelligent content caching is one of the most effective ways to improve the experience for your site's visitors. Caching, or temporarily storing content from previous requests, is part of the core content delivery strategy implemented within the HTTP protocol.

Source: Web Caching Basics: Terminology, HTTP Headers, and Caching Strategies | DigitalOcean

 

Very interesting read on web caching.