A new project : tetromino

8 October, 2017

It’s been silent here for a while, but with a good reason. I’ve been focusing my attention on a side-project : building a Tetris clone for the PS Vita 😀 (homebrew hacked, of course).

You can download the game here

I have even been featured at wololo’s, which is amazing 🙂

new look :

old look :

certificate out-of-date

8 September, 2017

That awkward moment when you write so many articles to help people automate and get certificates and then don’t correctly validate that your own server is working correctly … 😀 Sorry for the downtime !

What happened is this : I host 3 domains on 1 server, and one of those 3 is no longer used. The script still asked for a certificate for it, which resulted in an error (I have been running this script weekly for quite some weeks now). Because of that error, nginx never picked up the new certificates. That still left me weeks to find out, but I didn’t, until today, while I’m on holiday (with a horrible connection). On top of that I found out that Cloudflare’s “always on” is in fact free advertisement for Cloudflare’s broken feature (my website was not cached at all !) and visitors could not even accept the out-of-date certificate.

Well it’s now hotfixed. Enjoy my blog once more ! And I hope I can enjoy the rest of my holidays !

While installing node.js, I ran into this error :

Error: Package: 1:nodejs-6.11.1-1.el7.x86_64 (epel)
           Requires: libhttp_parser.so.2()(64bit)
Error: Package: 1:nodejs-6.11.1-1.el7.x86_64 (epel)
           Requires: http-parser >= 2.7.0

You can solve this by installing http-parser manually :

yum install https://kojipkgs.fedoraproject.org//packages/http-parser/2.7.1/3.el7/x86_64/http-parser-2.7.1-3.el7.x86_64.rpm

The reason behind the issue is that RHEL 7.4 now includes this package in the default repo, and for that reason it has been removed from the EPEL repository. The result is that RHEL users no longer have to fight over which version is best, but the disadvantage is that CentOS users, who are stuck on CentOS 7.3 for now, are missing a dependency for node.js, at least until CentOS 7.4 is out.
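
Once http-parser is installed, the node.js install from EPEL should go through without the dependency error :

yum install nodejs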

200th post

8 August, 2017

Yeaaaj, 100 more badly written articles and barely working guides ! Well, it seems I enjoyed making them and somehow somebody likes to read them. Some statistics to show off !

Like last time, users/bots seem to drop off during the weekend; the effect is even easier to see now. I assume part of the traffic comes from people searching for help during work hours. Long live weekends.

Overall traffic has been increasing. I’m not really sure why there was a drop last month, nor do I really care. Obviously the last data point is still incomplete (since we’re only 8 days into the month).

I was not aware of it, but clearly Thursdays are my blog day. Last time, Friday (2nd) and Wednesday (3rd) were close follow-ups. Tuesday, which came in second to last back then, is now my 4th most active day, and Saturday keeps being the last in the row.

On to the next 100 articles 🙂

I have wanted to redo/rework the Passbolt install on CentOS for a while. It seems like a horribly long and complex process, but in fact it’s not. With the recently released Passbolt 1.6 and my wish to play with asciinema for a while, I thought why not combine both 🙂 Considering this is my first attempt, don’t shoot me ! If you prefer a readable version, feel free to use my text guide version.

Read More

It’s not the first time I have received this error. It is the error you get when the file (or large input in general) you tried to upload is too large and the server is declining it. Here is how you can fix it :

Open the nginx config file that holds your server {} block :

nano /etc/nginx/conf.d/svennd.conf

and add the following, where 10M is ~10MB :

client_max_body_size        10M;

Note that PHP also has an upload limit (see the docs).
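
For reference, the matching PHP limits live in php.ini and would look something like this (the values are just an example to match the nginx limit above) :

; in php.ini (e.g. /etc/php.ini on CentOS)
upload_max_filesize = 10M
post_max_size = 10M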

Reload Nginx to activate :

service nginx reload

or 

systemctl reload nginx

Note that client_max_body_size can be set in both the http and server context; I prefer server as it is more specific.
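
For completeness, in the server context it would look something like this (the domain is just an example) :

server {
    listen 80;
    server_name example.com;

    # allow request bodies (uploads) up to ~10MB
    client_max_body_size 10M;
}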

I have some NFS servers (read : their main function is to store data and share it over NFS) to maintain, and mostly they simply work (like everything in Linux). Most variables have been battle-tested for … well, forever. So you rarely need to check what the best trade-off is. Well, enter the NFS thread count.

Read More

I recently learned about tcp_sack. While I certainly don’t understand every detail of this feature, it’s clear it should be a huge help in cases where packets (in the TCP protocol) are dropped and latency is relatively high. From my basic understanding : every TCP packet that is sent has a sequence number. When tcp_sack is enabled on both client and server, the client can report back to the server exactly which ranges it received, and hence which range has been dropped. When tcp_sack is not enabled, the client will only acknowledge the “last” sequentially received packet, and everything after that packet has to be resent.

E.g. : packets 1-10 are received, packets 11-14 are lost, packets 15-35 are received. With tcp_sack : the client will tell that it received everything up to packet 10 plus packets 15-35, and hence the server only has to resend packets 11-14. Without tcp_sack : the client will only tell that it received up till packet 10, and hence the server will have to resend packets 11-35.

In all the distro’s I could get my hands on (CentOS, Debian, Ubuntu, …) it was on by default ! The question, however, is how many packets are commonly dropped, and does the communication even have “high” latency ? And at what cost does tcp_sack come ? I found little data about the resource consumption of this feature, but since it’s on by default I assume it’s trivial. I did however find this article that claimed that on a ~4MB file, over an emulated connection with 1% packet loss, tcp_sack actually made the transfer slower (above 2 minutes with tcp_sack vs below 2 minutes without). That seems to defeat the purpose of tcp_sack altogether. I am not that interested in those situations though, as my environment is local servers talking to each other; I don’t really care that much whether they go faster or slower under packet loss, as it’s a red flag if latency or packet loss happens at all.
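
If you want to check or toggle it yourself, it’s a simple sysctl (the change below is on-the-fly; add it to /etc/sysctl.conf to make it persistent) :

# check the current setting (1 = enabled)
sysctl net.ipv4.tcp_sack

# disable it temporarily
sysctl -w net.ipv4.tcp_sack=0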

I copied over a random payload to check if the parameter has any influence on the time spent on the transfer.
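
Such a test is nothing more than toggling the parameter and timing the same copy twice; roughly along these lines (the payload file and host are placeholders, not my actual setup) :

# with tcp_sack enabled
sysctl -w net.ipv4.tcp_sack=1
time scp payload.bin root@server:/tmp/

# with tcp_sack disabled
sysctl -w net.ipv4.tcp_sack=0
time scp payload.bin root@server:/tmp/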

Read More

Woohoo, Proxmox VE 5.0 has been released ! This version is based on Debian 9 (Linux kernel 4.10). It includes a lot of new features, but sadly updates still point to the enterprise repository by default. This results in ugly error messages when trying apt-get update, such as :

W: The repository 'https://enterprise.proxmox.com/debian/pve stretch Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/stretch/pve-enterprise/binary-amd64/Packages  401  Unauthorized
E: Some index files failed to download. They have been ignored, or old ones used instead.

To fix this, it’s similar to V3.X and V4.X. We need to add one repository : nano /etc/apt/sources.list

add :

deb http://download.proxmox.com/debian stretch pve-no-subscription

Then disable or remove the enterprise repository :

rm -f /etc/apt/sources.list.d/pve-enterprise.list

(to disable instead of remove, add a # in front of the line starting with deb)
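
Or, if you prefer a one-liner to disable it :

sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list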

Now you can run those updates 🙂 Take note : only apt-get update and apt-get dist-upgrade are supported by Proxmox !

Now, I dislike the way Proxmox pushes people towards a subscription and their pricing method, but it is an amazing piece of free software ! Well done Proxmox devs !

After reading cron.weekly a few weeks ago, I was intrigued by binsnitch.py, a tool that creates a baseline file with the md5/sha256/… hash of every file you wish to monitor. In case you think you have a virus, malware or cryptovirus, you can easily verify which files have been changed. This is kinda fun; the sad part is that it uses Python and requires Python >= 3, which restricts its use on CentOS (Python 2 is the default). I dislike an unneeded dependency like that on my servers. So I wrote a quick and dirty alternative to it. The only requirements are bash and md5sum (or, if you wish, some other sum tool such as sha256sum), which I believe are common on every Linux server.

You can download & adapt it here.
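
The concept boils down to this (a minimal sketch, not the actual script; the target directory and baseline file name are just examples) :

#!/bin/bash
# build a baseline : one md5 hash per file under the directory you want to monitor
find /etc -type f -exec md5sum {} + > baseline.md5

# later : verify against the baseline, only changed or missing files are reported
md5sum --quiet -c baseline.md5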