# Rm: Argument list too long
Published on 13-07-2018


There is a Dutch saying: "Bij de loodgieter lekt de kraan", which literally means "The tap leaks at the plumber's home".

My security camera records pictures and videos to my NAS server. Once in a while I have to truncate the directory because it's no longer accessible through CIFS. Within 3 months there are more than 100k files. I do this manually because I'm lazy.

Usually you can get away with just running the rm command, but with a huge number of files you get this message:

root@nas:/mnt/Data # rm -rf dvr/*
/bin/rm: Argument list too long.

You get this message because of a kernel limit: the combined size of the arguments passed to a single exec() call may not exceed ARG_MAX. You can find out the limit on your system with this command:

getconf ARG_MAX
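For context, a quick sketch showing what the limit actually measures (bytes of argv plus environment, not a count of files), compared against the size of a glob expansion in the current directory:

```shell
# ARG_MAX is a byte limit on the combined argv + environment passed
# to a single exec() call, not a number of files. Compare the limit
# with the size of the expansion the shell would hand to rm:
getconf ARG_MAX
printf '%s ' * | wc -c
```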

The solution

Use a for loop within the current directory:

for f in *.png; do rm "$f"; done

Use find:

#Delete .png files within the current directory
find . -name "*.png" -exec rm {} +

#Delete all regular files within the current directory and below
find . -type f -exec rm {} +
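A third option, sketched below in a scratch directory: pipe the file names to xargs, which splits them into batches that stay under ARG_MAX (-print0/-0 keeps names with spaces or newlines intact):

```shell
# Scratch directory is illustrative; create many files, then delete
# them in ARG_MAX-safe batches with find + xargs.
dir=$(mktemp -d)
for i in $(seq 1 200); do touch "$dir/$i.png"; done
find "$dir" -name '*.png' -print0 | xargs -0 rm
```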

Source: Stackoverflow


If you're familiar with Eclipse, you know that when you save a Java class it is built automatically. With IntelliJ you don't have to spam the Ctrl + S combination anymore because it saves automatically, but there is one problem: your class will not be compiled.
Some developers prefer to compile manually because of IDE performance, but it can trick you into weird behavior if you forget it once, especially when developing microservices and running 5 instances of IntelliJ.

IntelliJ has the option to build the project automatically, BUT it only works while not running/debugging. How sad is that.

The problem I was facing

I was developing a Spring Boot application using Spring Boot Developer Tools, which comes with a neat feature where classes are reloaded: the dev tools restart only the ServletContext, within seconds. The main problem is that this only happens when the class is compiled. So in IntelliJ you had to trigger a build yourself, while Eclipse compiles your class automatically on save, even while running.

In IntelliJ it is possible to override this behavior to build automatically even when the application is running.

The setting we want to override is this:


1. Open the registry

2. Look for the setting and enable it

Keep its description in mind in case you run into weird behaviour.

Close the registry and you are done.





# Axis TCPMon plugin for IntelliJ
Published on 12-07-2018

TCPMon is a handy TCP/IP tool that intercepts traffic passing through it and shows the payload. Sometimes when you are passing a cookie or some other metadata, you want to be sure everything is transmitted correctly. The tool also allows you to simulate a slow connection, which is ideal for reproducing timeout and/or performance issues.

TCPMon is no longer maintained, but it is still one of the most popular tools out there. If you have a better alternative, feel free to mention it.

You can install TCPMon in IntelliJ as a plugin and it will appear in your toolbar.


  1. Configure the port you want to listen on
  2. Configure the target hostname and port
  3. Click Add

Now when you make requests to port 8081, TCPMon forwards them to the configured target. You will see the following output:

# Refreshing the knowledge!
Published on 28-04-2018

As a software engineer you deal with a lot of programming languages and frameworks, but how the hell do you manage to remember everything you've worked with? The key is practicing and making notes. Are you actually doing that? Of course you do, but I noticed I was lacking in it. At work I'm on full-time projects where I mostly write Java. The idea is to start creating small applications in various languages to refresh my knowledge.


The plan

#1 Python

Create a small blog application with python using the flask framework.

Repo: python-blog


During the development I will add more steps.

On Proxmox I was getting this error when connecting through SSH:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

It has to do with a locale that can't be loaded: the SSH client forwards your local LANG/LC_* environment variables, and the server doesn't have that locale installed.


Remove/Comment this line in /etc/ssh/sshd_config:

AcceptEnv LANG LC_*

Restart sshd to apply the change:

systemctl restart sshd
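If you'd rather fix it on the server side, a sketch (assumes a Debian-based system, as Proxmox is, and requires root): generate the locale the client forwards instead of refusing it.

```shell
# Uncomment en_US.UTF-8 in /etc/locale.gen, then regenerate the
# installed locales. Adjust the locale name to whatever the client
# forwards.
sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
locale-gen
```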

Yep, today was my day. Sometimes I forget to load my private key before connecting to the server and I get rejected. The authentication errors accumulate and ta-da, I get locked out after the 3rd attempt. DenyHosts keeps track of the number of authentication attempts and adds the IP to the /etc/hosts.deny file.

DenyHosts is a script intended to be run by Linux system administrators to help thwart SSH server attacks (also known as dictionary based attacks and brute force attacks).
Bots are scanning the internet and continuously attempting to log in. Take a look at your auth log file and you will see many attempts, and with many I mean many PER SECOND!

When I take a look at my /etc/hosts.deny file I see my own IP. After removing my IP, it is put back on the next login attempt. What the hell?

DenyHosts also stores the IP addresses in the files listed below. Make sure you check them all and remove your IP where necessary.

But first stop DenyHosts:

systemctl stop denyhosts.service

When you are done, start it again:

systemctl start denyhosts.service


Always disable password authentication in SSHD. Instead use public key authentication. It's more convenient and more secure. But be careful with your private key.

I was trying to get readable output from a stored procedure in Oracle SQL Developer, but the lines were truncated after 90 characters. The tool didn't have an option to change the line size for the script output.

It was faster for me to write this in Java to get the readable output of the query.

This code will print the column names with the record values.

      ResultSet rs = getResultSet();
      while ( {
        int columnCount = rs.getMetaData().getColumnCount();
        List<String> results = new ArrayList<>();
        //Column index is one-based.
        for (int i = 1; i <= columnCount; i++) {
          String columnName = rs.getMetaData().getColumnName(i);
          String value = rs.getString(i);
          results.add(columnName + "=" + value);
        }
        System.out.println(String.join(", ", results));
      }


# SVN commit deleted files
Published on 12-03-2017

Command to schedule files you already deleted from disk for removal in your next commit:

svn st | grep ^! | awk '{$1=""; print " --force \""substr($0,2)"@\"" }' | xargs svn rm
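The pipeline is dense, so here is a dry run of the grep and awk stages on a sample `svn st` line (the file path is illustrative; no repository needed):

```shell
# grep keeps only missing ('!') entries; awk blanks the status
# column and wraps the remaining path in --force "...@" (the
# trailing @ guards against paths that themselves contain an @).
printf '!       docs/old file.txt\n' \
  | grep '^!' \
  | awk '{$1=""; print " --force \""substr($0,2)"@\"" }'
# output: --force "docs/old file.txt@"
```

xargs then hands each of these argument strings to `svn rm`.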


# This is where the magic happens
Published on 12-02-2017

Yesterday I went to EvoSwitch to find out why my other server was not reachable anymore. I tried to send an ACPI reboot signal remotely but the server didn't respond to it. I had that moment again where I could smash my head into my desk.

I have the following servers:
#Server 001
Dell R210 from 2010

#Server 002
Dell R310 from 2013


When I installed the R310, I migrated completely from the R210. I also switched the IP addresses because of my mail server reputation. It's hard to build one up, so it would be a waste not to use my old IP. So the R310 is configured as Server 001 and the R210 is running as Server 002. I thought I was done but forgot one thing... THE POWER CABLES!

If I sent a reboot/shutdown signal remotely to Server 002 (the R210), the R310 shut down! FUCK!


Replacing the R210

I replaced my R210 with an R410 which I acquired from work, with the following specs:
2x Intel Xeon E5640
12GB DDR3 RAM
Dell PERC H700
2x Western Digital RE 500GB

I installed Fedora 25 with Docker to try some cool things out. It's cooler to run it on your own colocated server than in a VirtualBox on your workstation :D.
The nice thing about Fedora is that it ships the newest packages, so I don't have to mess around with 3rd party repos, which were required on CentOS. Running a bleeding edge release can have its benefits or can turn out very badly. But you can say the same about the dependency hell you get from using 3rd party repositories.


The shared rack

As you can see it's a fucking mess. No, not my hair; it's the servers I'm talking about, hanging on 2 screws without rack rails. This server could literally "crash" down from the rack.

You come across things like Sitecom consumer switches connected to PowerEdge/HP servers.

It's running well now. Let's try the cool stuff. Running on 16 cores baby!


When showing a DialogFragment, the title is not visible when it opens on a small resolution or on a smartphone. This is the default behavior/styling of Android to preserve space for the content. In some cases you don't want to hide the title because the user could miss the context of the dialog.

For example, you have a dialog with an EditText and a submit button. It would be shown like this:

Well how the heck do I know what to fill in here?

Let's show that title

Open styles.xml and add this style:

<style name="DialogWithTitle" parent="@style/Theme.AppCompat.Light.Dialog">
	<item name="android:windowNoTitle">false</item>
</style>

The naming of the android:windowNoTitle property is confusing: if you want to show the title, you have to set it to false.

In your DialogFragment class you have to apply the style:

	public static AlertTextDialogFragment newInstance() {
		AlertTextDialogFragment fragment = new AlertTextDialogFragment();
		fragment.setStyle(DialogFragment.STYLE_NORMAL,;
		return fragment;
	}

Tadaa, the title is always visible now.


If you have to name some kind of setting or boolean, try to give it a positive name. In this case it would be clear if the name were android:showWindowTitle.

Try to avoid names like these:

boolean disableChecks
boolean notShowing
boolean dontSave


A handy package for searching through indexed files is mlocate, which comes bundled with a server or workstation installation of most Linux distributions.

The first time you run it, you will get this error:

[root@puppetmaster manifests] locate "site.pp"
locate: can not stat () `/var/lib/mlocate/mlocate.db': No such file or directory

This means the mlocate.db file is missing because the file index database has never been built before.

Run this command to build the file index database:

[root@puppetmaster manifests] updatedb

You can also run this command again later to update the database.

Note that it could take a while to build the database.

Somebody at GNOME decided to make the second monitor stay fixed when you switch workspaces. To make the second monitor switch workspaces too, change this GNOME setting by running this command:

gsettings set org.gnome.mutter workspaces-only-on-primary false

Note: With older versions of GNOME you may need to use gconf-editor.

In GNOME 3.8 you can switch workspaces using Super + Page Up or Super + Page Down. Unfortunately you can also do it with Ctrl + Alt + Arrow Up or Ctrl + Alt + Arrow Down. The latter conflicts with a neat shortcut in Eclipse to duplicate lines.

Under Settings > Keyboard you will find the hotkey to switch workspaces. The Ctrl + Alt + Arrow Up/Down binding is always set but is not shown there.

Check which keys are bound here:

gsettings get org.gnome.desktop.wm.keybindings switch-to-workspace-up
['<Super>Page_Up', '<Control><Alt>Up']

gsettings get org.gnome.desktop.wm.keybindings switch-to-workspace-down
['<Super>Page_Down', '<Control><Alt>Down']

You can clearly see that both commands have 2 hotkeys assigned.

To remove the Ctrl + Alt variant run this:

gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-down '["<Super>Page_Down"]'
gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-up '["<Super>Page_Up"]'

The hotkeys Super + Page Up and Super + Page Down are retained but Ctrl + Alt + Arrow Up and Ctrl + Alt + Arrow Down are removed.
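If you ever want the original bindings back, `gsettings reset` restores a key to its schema default (sketch; needs a running GNOME session):

```shell
# Undo the change and restore GNOME's default workspace bindings.
gsettings reset org.gnome.desktop.wm.keybindings switch-to-workspace-up
gsettings reset org.gnome.desktop.wm.keybindings switch-to-workspace-down
```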

Today I was trying to run a unit test in Android, but for some reason it didn't work. It was a D'oh! moment again.

This was the message I got:

Process finished with exit code 1
Class not found: "xx.xx.xx"
Empty test suite.

In Android there are two kinds of unit tests:

  • Local unit tests, which run on your computer inside the JVM
  • Instrumented unit tests, which only run on an Android device

These tests have their own package with the same name as your application identifier.

You would see this inside your java folder:

nl.orhun.myapp <-- source files
nl.orhun.myapp (androidTest) <-- instrumented test files
nl.orhun.myapp (test) <-- local unit test files

The advantage of local unit tests is that you don't have the overhead of running a virtual device; they simply run on your computer's JVM. Unfortunately you can't use Android framework features, so this is only handy for testing your own code that doesn't depend on the Android framework.

Initially I created a local unit test and ran it. After that I moved the class to androidTest to make it an instrumented test.

Because I had run it before, IntelliJ remembered my last run configuration. The next time I ran the LoginTest, it tried to run it as a local test while the test was inside the androidTest package, which gave this error.

So, when you move a unit test from local test to androidTest, check your Run/Debug configurations. In my case I could remove the configuration so a new one would be generated the next time.


All your androidTest configurations are under AndroidTests. The local unit tests are under JUnit.

# Remove cached SVN credentials
Published on 10-11-2016

When using Linux or OS X, your SVN credentials are saved under your user profile.

This happens whether you use an IDE or svn from the command line.

This message is shown:

ATTENTION!  Your password for authentication realm:

   <https://domain.tld> Staff only

can only be stored to disk unencrypted!  You are advised to configure
your system so that Subversion can store passwords encrypted, if
possible.  See the documentation for details.

You can avoid future appearances of this warning by setting the value
of the 'store-plaintext-passwords' option to either 'yes' or 'no' in
Store password unencrypted (yes/no)? yes

As it says, SVN stores your username and password in plain text inside your user profile.

To remove all stored SVN usernames and passwords:

rm ~/.subversion/auth/svn.simple/*


The contents of a credential file look like this:

K 8
passtype
V 6
simple
K 8
username
V 7
K 15
svn:realmstring
V 41
<https://domain.tld> Staff only
K 8
password
V 6


Today I wanted to create an AttachmentMenuDialog that is reusable in an Activity or Fragment. Like the name says, it's a DialogFragment used to add file attachments to your Activity/Fragment.

In the image below you see that I have two Activities that open the AttachmentMenuDialog. The only difference is that ActivityOne opens the fragment directly while ActivityTwo opens it from another fragment.

I struggled with the problem that it only worked in one of the two Activities. When I fixed it for ActivityOne, it broke in ActivityTwo and vice versa.


Inside AttachmentMenuDialog I have three buttons:

  • Add a file from the storage (File intent)
  • Take a picture with the camera (Camera intent)
  • Create a PDF document inside PDFActivity

Adding a file from storage and taking a picture were no problem at all, but getting the PDF document back as a result from the PDFActivity was not as easy as it should be.

On the PDF button I had this onClickListener:

pdfButton.setOnClickListener(new View.OnClickListener() {
	public void onClick(View view) {
		Intent intent = new Intent(getContext(), PDFActivity.class);
		//REQUEST_CODE_PDF = 1234
		startActivityForResult(intent, REQUEST_CODE_PDF);
	}
});

You would expect this to work, right? Well, it doesn't. Not this way at least. The PDFActivity opens, and when I create a PDF, I would like to get the result back inside my Activity. As in the scheme above, I want the PDF document inside ActivityOne or ActivityTwo.

The problem is that when you open the activity using the fragment's startActivityForResult, you lose the requestCode: the activity generates a new one and you will never, ever get your result back. After running the debugger I found out that the requestCode was always changed to 65660. Don't ask me why.


You have to start the Activity from your parent activity like this:

getActivity().startActivityForResult(intent, REQUEST_CODE_PDF);

When the PDFActivity is finished, it will execute onActivityResult() from the parent Activity. You will have to delegate it to your fragment if you need it there.

public void onActivityResult(int requestCode, int resultCode, final Intent data) {
	super.onActivityResult(requestCode, resultCode, data);
	if (attachmentMenuDialog != null) {
		attachmentMenuDialog.onActivityResult(requestCode, resultCode, data);
	}
}


Well, the first problem is solved. My PDF document from PDFActivity is passed to the Activity where I expect it, but as if I weren't frustrated enough, I faced a new problem. Why the heck is my Activity (ActivityOne) closing when I create my PDF inside PDFActivity?

In ActivityOne and in ActivityTwo the PDFActivity returned the PDF document inside the onActivityResult method. For some reason ActivityTwo finished/closed automatically when PDFActivity was finished.

Let's take a look at the PDFActivity where I finish the activity:

public static final int RESULT_PDF_OK = 5;

public void onTaskComplete(File pdfFile) {
	Intent intent = new Intent();
	intent.putExtra(INTENT_PDF_FILE, pdfFile);
	setResult(RESULT_OK, intent);
The problem was that I used RESULT_OK as the result code, which is a constant reserved for internal usage: Android defines RESULT_OK = -1, RESULT_CANCELED = 0 and RESULT_FIRST_USER = 1. For your own purposes you can safely use custom result codes from RESULT_FIRST_USER upward.

I think that because I opened the whole Fragment/Activity chain using getActivity().startActivityForResult(), the result was delivered to my Activity in a way that unintentionally finished it, because the result was RESULT_OK.

Managing your Linux machine remotely is a great thing, but you shouldn't allow root to log in over SSH, at least when it's reachable from outside. There are anonymous groups who will beat you up if you allow this. Nah, just a joke, I will beat you up personally. Or you could read this article.


Why shouldn't you allow root to log in over SSH anyway?

Everyone knows that every Linux operating system has a user called 'root' who can do anything on the system. Root is the root. It can even take your dog away for a walk!

Because everyone knows this user exists, an attacker only needs to guess the password to break into your system with a brute force attack. Someone starts a script or bot that continuously makes login attempts with generated passwords. So the first thing to do is disallow root access. Or even better, keep a whitelist of IP addresses you allow SSH connections from. Or if that is not an option, just block IP addresses from which many unsuccessful login attempts are made. I will write an article about that too, but in the meanwhile take a look at DenyHosts.

So we are going to disallow the root user from logging in over SSH.

Open the sshd_config file:

sudo vim /etc/ssh/sshd_config


Look for this line:

#PermitRootLogin no

After a clean install you will see that this line is usually commented out. This means the default value is used, which is "yes". So having this line commented out means root login is allowed. Holy shit bro, look out, I might be standing behind you with a baseball bat!

Just change this line into:

PermitRootLogin no

Restart sshd to apply the changes:

sudo /etc/init.d/sshd restart

So how are you supposed to log in now? You need a normal user with administrator rights, or you add the user to sudoers using visudo. Don't edit the /etc/sudoers file directly; use visudo because it validates your changes. If you screw this up, sudo will not work properly.

# Proxmox 4.x enabling IP forwarding
Published on 21-05-2016

If you upgrade your machine from Debian Wheezy to Jessie, you will find out that IP forwarding no longer works. In my case I was upgrading Proxmox 3.1 to 4.2 and masquerading with iptables, so having 1 public address for multiple virtual machines.

Check your /etc/network/interfaces and add this line:

post-up echo 1 > /proc/sys/net/ipv4/ip_forward

This enables IP forwarding whenever the interface comes up.

Should look like this:

auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static

auto vmbr0
#private sub network
iface vmbr0 inet static
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '' -o eth0 -j MASQUERADE
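A sketch of an equivalent approach with sysctl, in case you prefer it over the post-up hook (requires root; the sysctl.conf entry makes it survive reboots):

```shell
# Enable IP forwarding now, and persist it across reboots.
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
```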



# Windows hostfile is being ignored
Published on 20-05-2016

I had to modify the hosts file on a Windows 10 machine, but the changes were ignored all the time. It took me literally an hour to find out why it didn't work. The client PC had been upgraded from Windows 7 to Windows 10 and had 2 Windows folders.

Usually I navigate to this folder by typing it in the file explorer:


The hosts file was indeed located there, but the changes had no effect. The problem was that there were multiple Windows directories, so this one was unused.

To make sure that you will always get the proper hosts file, go to this location by using the %SystemRoot% environment variable:

%SystemRoot%\System32\drivers\etc

For more generic information about the hosts file, visit this Wikipedia article.


It took me 8 hours to solve this problem. I was creating a custom view that extends LinearLayout with a RecyclerView as a child. The problem was that the RecyclerView didn't measure correctly, so the width and height stayed at 0dp and the RecyclerView was never visible.

While trying some tricks and hacks to measure the RecyclerView, I found out that this was an Android bug, fixed on February 25, 2016: RecyclerView was ignoring layout params such as WRAP_CONTENT and MATCH_PARENT.

Upgrade the RecyclerView dependency to at least this version:

compile ''

Also take a look at this article about Android Support Library 23.2.

Happy programming all!

When you are creating your awesome Android app, you'll make use of the awesome Android libraries like those listed on this page.
Sometimes you will face the problem that you want to add some behaviour to a component, like a View.

I had the following problem: I was using the Android material chips library, but I needed to do something when the EditText lost focus. I managed to get the EditText view instance using View.findViewById(), but the library had already set its own OnFocusChangeListener. If I set my own OnFocusChangeListener implementation, I would break the library's functionality. So I had to find an alternative way to detect the focus/blur event.

You can do it with the following code:

ViewTreeObserver viewTreeObserver = getViewTreeObserver();
viewTreeObserver.addOnGlobalFocusChangeListener(new ViewTreeObserver.OnGlobalFocusChangeListener() {
    public void onGlobalFocusChanged(View oldFocus, View newFocus) {
        //oldFocus could be null
        if (oldFocus == null || !oldFocus.equals(myEditText)) {
            return; //the focus change didn't leave myEditText
        }
        //myEditText just lost focus; handle the blur event here
    }
});

Just don't forget to check whether oldFocus is null, or else you will get a NPE.

Focus an EditText and move the cursor to the end of the text:

EditText editText = (EditText) findViewById(; //id is illustrative
editText.setSelection(editText.getText().length());

Let Android open a file in an app that can handle it, like opening a PDF in Adobe Reader or an image in your photo viewer.

Determine the mimetype by file extension:

public static String getMimeType(String filename) {
    String type = null;
    String extension = MimeTypeMap.getFileExtensionFromUrl(filename);

    if (extension != null) {
        type = MimeTypeMap.getSingleton().getMimeTypeFromExtension(extension);
    }
    return type;
}


Let Android open the file by finding the right app for it. Starting the intent throws an unchecked ActivityNotFoundException if no app can handle the file.

public static void openIntent(Context context, File file, String mimeType) {
	Intent myIntent = new Intent(android.content.Intent.ACTION_VIEW);
	myIntent.setDataAndType(Uri.fromFile(file), mimeType);
	try {
	} catch (ActivityNotFoundException e) {
		Toast.makeText(context, "No app found for this file type.",
	}
}


Use it like this:

//The file you want to open
File file = new File("/somedir/somefile.png");

//Get the mimetype of the file
String mimeType = getMimeType(file.getName());

//Open the file in a new Activity
openIntent(getContext(), file, mimeType);


# Android vibrate programmatically
Published on 15-02-2016

To vibrate your Android device programmatically, use the code below. The 1000 stands for a duration of 1000 milliseconds.

Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
vibrator.vibrate(1000);

Don't forget to add the required permission to your AndroidManifest.xml file:

<uses-permission android:name="android.permission.VIBRATE" />



Creating a SPA containing multiple pages/views requires a router like ui-router. The only thing about ui-router is that the documentation is kind of messed up. I will explain how to pass parameters between multiple views.

When you switch between states in your Angular app, you will sometimes need to pass some parameters to the state you want to enter. Mostly you will pass some kind of a number that stands for the primary key of the entity you are trying to show.

In the code below you see a state that requires a personId as a param. The URL of the state contains a placeholder where the personId param is expected.

If you go to the URL /person/1337/, call ui-sref="person.detail({personId:1337})" or even call $state.go("person.detail", {personId: 1337}), then the personId can be read from $stateParams.personId. Keep in mind that you could get a string value.

   .state("person.detail", {
		url: "/person/:personId/",
		templateUrl: "templates/personDetail.html",
		controller: "PersonDetailController"
	})


Let's get to the next part. Sometimes you want to pass an object as a param. ui-router allows that, but you need to define the param in your state first. You can't just pass arbitrary params unless you have defined them. These parameters are optional and have an initial value.

   .state("person.detail", {
		url: "/person/:personId/",
		templateUrl: "templates/personDetail.html",
		controller: "PersonDetailController",
		params: {
			dateOfRequest: new Date(), //initial default value
			manager: null //null as the initial value, but defined
		}
	})

In the code above I have defined the params dateOfRequest and manager. These parameters must have an initial value. If you really don't need one, you can set the parameter value to null.

This is the way how we pass the params now:

$state.go("person.detail", {personId: 1337, dateOfRequest: getDate(), manager: getManager()});

Same story if you use ui-sref:

ui-sref="person.detail({personId:1337, dateOfRequest: getDate(), manager: getManager()})"

In the controller of the state you switch to, you can fetch the parameters like this:

.controller("PersonDetailController", function($scope, $stateParams) {
	$scope.manager = $stateParams.manager;
})


The last way to pass values between controllers, states and views is using an Angular service. A service is only instantiated once and its reference is shared.

Angular services are:

  • Lazily instantiated – Angular only instantiates a service when an application component depends on it.
  • Singletons – Each component dependent on a service gets a reference to the single instance generated by the service factory.



If you would like to swap the position of items in a JavaScript array, use this code:

var items = ["foo", "bar"];

function changeOrder(posA, posB) {
    var itemA = items[posA];
    var itemB = items[posB];

    items[posA] = itemB;
    items[posB] = itemA;
}

changeOrder(0, 1);
console.log(items.join(",")); // will output "bar,foo"

JSFiddle example

Provide search engines and crawlers with useful rich information about your website by specifying Open Graph tags among your meta tags. The Open Graph protocol was developed by Facebook and is used by Facebook itself.

The common OG tags are:

	<meta property="og:title" content="Blog about software development by G&ouml;khan Orhun." /> 
	<meta property="og:site_name" content=" software development &amp; websites" /> 
	<meta property="og:image" content="" /> 
	<meta property="og:url" content="" /> 
	<meta property="og:description" content="Blog about software development by G&ouml;khan Orhun." />


If you share this website on Facebook or WhatsApp, you will see this:



Find more OG tags on


To debug your website's Open Graph tags, use the Facebook Debug Tool (you probably have to log in first).
Paste your URL in the field and press "Debug". The first time you debug, the scraper shows cached data; press "Fetch new scrape information". This is also the way to clear the cache for a given URL.




# Securing your connection
Published on 22-12-2015

Encryption is becoming more important now that the internet has grown a lot. We have internet everywhere, even in our pockets. There are whole companies whose business is based on providing services over the internet. It's kind of weird that these companies are selling something you can't hold in your hands.

When you visit a website, your web browser establishes a connection with the web server using the HTTP protocol. This is a plain text connection.

Using an unencrypted, plain text connection is not a big deal for normal websites that store no personal data, like this site. Important services like internet banking, webmail clients or payment services require an encrypted connection.

The dangers of using an unencrypted HTTP connection:

  • All the data between client and server can be exposed with a network sniffer like Wireshark.
  • You don't know if the server/site is the real one. This falls under the category phishing.

An encrypted connection prevents an interceptor from reading the data. If someone uses a network sniffer, the connection will show up but all the data is encrypted. There are test sites that let you check whether your browser accepts invalid, insecure certificates.


Test your website's or server's supported encryption. This site also tests your server for vulnerabilities.


Mozilla has launched a free service to generate your own TLS certificate.



# From a barebone to a PowerEdge
Published on 30-07-2015

It all started with my hobby barebone server. My goal was to learn to set up a webserver and create some hobby websites on it. As a student, I bought an Asus Terminator T2-P Deluxe.

The Asus Terminator T2-P deluxe had the following specs:

  • Socket 478 motherboard
  • Intel Celeron D 2.4 Ghz(after a while upgraded to a Pentium 4 3.06 Ghz)
  • 2x 1 GB DDR PC 3200
  • 1x 80 GB WD disk
  • 1x 1TB Samsung disk
  • Running Windows XP with XAMPP and Webmin (yeah, I know :'))


After my home server got hacked due to crappy security (who the hell allows remote access to Webmin), it became a member of a botnet. This lasted a few days until my ISP disconnected me and sent me a letter. The letter stated that there were illegal activities from my IP and that's why they kicked me off the internet.

It was time to try the real deal: FreeBSD. It was my first introduction to a UNIX-like system. Looking back, it was the hardest distro ever, but it was the best way to start learning about UNIX systems. It took me around 4 reinstalls to learn the things I should do better the next time. At one point I was running a Postfix mail server with an open relay; it sent around 40k mails each day.

The next step into UNIX-like systems was running CentOS. As on FreeBSD, I was running Apache, PHP, Postfix, MySQL and Bacula on the same operating system. This setup wasn't reliable, because as soon as one component crashes, everything crashes.

It became time to go enterprise. In 2010 I bought a Dell PowerEdge R210 with the following specs:

  • Intel Xeon 3430
  • 12GB RAM
  • 2x 500GB 7200Rpm WD RE


Colocated at Leaseweb at datacenter Evoswitch in Haarlem


After a year running CentOS colocated, I was interested in virtualisation. I've read about it and everyone talked about it like it was the answer to the meaning of life. Hell yeah, it was.

The first hypervisor I tried was ESXi. I bought a second-hand Dell PowerEdge 1850 with 2x 72GB SCSI disks to try some things with ESXi. The server running in my room nearly made me deaf. There were also problems with the drivers: because the PE 1850 was kinda old, I was forced to use an older version of ESXi, and I wasn't happy with the lack of a web interface. I sold the PE 1850.

The next thing I tried was Proxmox. This was what I was looking for: an open-source, Debian-based bare-metal hypervisor. I couldn't install it on my colocated PowerEdge R210 because my mail and websites were running on it, so I decided to buy another Dell server.

The new Dell PowerEdge R310 server specs:

  • Intel Xeon 3430
  • 16GB RAM
  • 4x 500GB 7200 RPM WD RE
  • Dell Perc H200A

Well, the server was up and running Proxmox with 4 virtual machines, and I faced the next problem: the datacenter gave me one public IP. I managed to make all virtual machines reachable behind that single public IP.


The solution was a virtual internal network between the virtual machines; Proxmox was configured to bridge incoming connections to the virtual firewall/router. This process is called NAT (Network Address Translation) and masquerading.

All other virtual machines sit behind the firewall/router. If the virtual machine running the firewall/router is shut down, all the underlying virtual machines are unreachable. Sounds secure to me. Every virtual machine I am running has its own responsibility. Like I said, running multiple 'servers' on one operating system is not reliable. So I have one virtual machine running only Apache, another running MySQL, and so on.

Why no VPS?

Well, I like to learn about managing my own server so I can do whatever I want. The only advantage of a VPS is that you don't need to worry about the hardware of your server. If something breaks, it isn't your responsibility anymore.

VPSes mostly offer a limited choice of operating systems. And like I said before, it's not reliable to give one server/operating system many responsibilities. If you want to run a webserver, a database and a mailserver on three separate VPSes, you will pay at least around €100.




# Getting started with AngularJs
Published on 29-07-2015

JavaScript is getting more popular, even though it can be a nightmare to write. There is an explosive growth of JavaScript frameworks that aim to make development easier so you can create code in less time. In this article I will explain how to get started with AngularJS.


AngularJS is an open-source JavaScript framework created by Google and maintained by Google and an open community of individual developers. Check the wiki for more info about AngularJS.

The framework is developer-friendly and runs client-side in the web browser. It also uses the MVC pattern, so the presentation layer is separated from the data and logic.


Let's begin!

You have to run a webserver like Apache on your PC. There are easy all-in-one packages (for Windows) to set up Apache, like XAMPP or WAMP (I don't have to tell you that you must not use these webservers in production, right? :)). Or you could use a Node.js webserver.

Create the index.html:

<!DOCTYPE html>
<html>
	<head>
		<title>Getting started with angular</title>
		<script src="/bower_components/angularjs/angular.min.js"></script>
		<script src="/js/app.js"></script>
		<script src="/js/controller.js"></script>
	</head>
	<body ng-app="angularApp" ng-controller="MainController">
	</body>
</html>


Create the js/app.js:

var angularApp = angular.module("angularApp", ["angularControllers"]);


Create the js/controller.js:

var angularControllers = angular.module("angularControllers", []);

angularControllers.controller("MainController", function() {
});
Well, that was quite easy, huh? If you run this code, you have a single page with a controller. Let's add some pages and navigation.

There are two options for navigation: Angular's built-in $route, and the commonly used third-party library UI-router. If you only need simple page navigation, the built-in $route is enough. If you want more advanced navigation with nested views and states, use UI-router. In this example we will be using UI-router.

You will have to download the UI-router library and include it in index.html. I am using the Bower package manager for my web libraries, even for Angular.
Bower also has a dependency management system: for example, if your fubar 1.2 library requires at least jQuery 1.10, then fubar 1.2 will be installed together with jQuery 1.10. Pretty cool, huh? Have another coffee with the time you've saved!

Just take a look at it if you have time. For this example you could skip the bower part and just download UI-router library.
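For reference, a minimal bower.json for this setup could look something like this. The package names match the bower_components paths used in index.html; the version numbers are only an example:

```json
{
  "name": "angular-getting-started",
  "dependencies": {
    "angularjs": "~1.3.0",
    "angular-ui-router": "~0.2.15"
  }
}
```

Running `bower install` in the project root would then fetch both libraries into bower_components/.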

Include the angular-ui-router.min.js in index.html and add the <ui-view> directive inside the <body> tag.

<!DOCTYPE html>
<html>
    <head>
        <title>Getting started with angular</title>
        <script src="/bower_components/angularjs/angular.min.js"></script>
        <script src="/bower_components/angular-ui-router/release/angular-ui-router.min.js"></script>
        <script src="/js/app.js"></script>
        <script src="/js/controller.js"></script>
    </head>
    <body ng-app="angularApp" ng-controller="MainController">
        <ui-view></ui-view>
    </body>
</html>

The <ui-view> directive marks the place where the content of the page states will be rendered.

Inside app.js we need to load the module "ui.router" and add the page state configuration.

var angularApp = angular.module("angularApp", [
	"ui.router",
	"angularControllers"
]);

angularApp.config(function($stateProvider, $urlRouterProvider) {
	$stateProvider
		.state("homepageState", {
			url: "/home/",
			templateUrl: "templates/homePage.html",
			controller: "HomePageController"
		})
		.state("myOtherStateName", {
			url: "/secondPage/",
			templateUrl: "templates/secondPage.html",
			controller: "SecondPageController"
		});

	$urlRouterProvider.otherwise("/home/");
});

This still sounds easy right? Well, it is! We have done the following things:

  • Registered the "ui.router" module with "angularApp".
  • Added a config block where the "ui.router" module is configured.
  • Added two states named "homepageState" and "myOtherStateName".
  • Added a fallback state to url: "/home/". You could define a 404 page like "templates/pagenotfound.html".

Every state has its state name (we will come back to this later), a URL that maps to the browser URL, and a controller. There are more advanced options; check the UI-router wiki. You could even delay the state loading until some requested remote data has been loaded, but that's out of scope for this article.

In this example we will be using a basic page with a controller behind it.


var angularControllers = angular.module("angularControllers", []);

angularControllers.controller("MainController", function($scope) {
	$scope.callTheMain = function() {
		console.log("What's up?!");
	};
});

angularControllers.controller("HomePageController", function($scope) {
	$scope.title = "This is the homepage";
});

angularControllers.controller("SecondPageController", function($scope) {
	$scope.title = "The secondpage";
});

Angular controllers support scope inheritance. In index.html there is <body ng-controller="MainController">. Every controller inside <body> inherits from "MainController". In this case, if the current state is "homepageState", the controller "HomePageController" automatically inherits from "MainController".

So basically like this:

<div ng-controller="ParentController">
        <div ng-controller="SomeChildController"></div>
</div>

"SomeChildController" can call "ParentController" but "ParentController" cannot access "SomeChildController"

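This parent/child behaviour comes from JavaScript's prototype chain: a child scope is created with the parent scope as its prototype. A minimal sketch in plain JavaScript (not Angular's actual source, just the underlying mechanism):

```javascript
// The parent scope defines a property.
var parentScope = { title: "from the parent" };

// A child scope is created with the parent as its prototype,
// which is what Angular does for nested controllers.
var childScope = Object.create(parentScope);

// The child can read properties defined on the parent...
console.log(childScope.title); // "from the parent"

// ...but properties set on the child stay invisible to the parent.
childScope.childOnly = "only here";
console.log(parentScope.childOnly); // undefined
```

This is also why writing to `childScope.title` creates a new property on the child instead of changing the parent's value.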

Let's create the state templates.

templates/homePage.html:

<h1>{{title}}</h1>
<br />
<div>Go to the <a ui-sref="myOtherStateName">next</a> page.</div>

templates/secondPage.html:

<h1>{{title}}</h1>
<br />
<div>Go back to the home <a href="#/home/">page</a>.</div>
<br />
<label>Modify the title!</label>
<input type="text" ng-model="title"/>

You should have noticed the difference between the two templates. In homePage.html we use ui-sref="myOtherStateName" to create a link to secondPage.html. This is the proper way to create links to other states: if you want to change the URLs of these states, you only have to change the url property in app.js.

The ui-sref also works on other html tags like <div ui-sref="myOtherStateName">link</div>. 

In secondPage.html we have used a plain <a> tag, which is self-explanatory.


Template compilation

Both templates contain the {{title}} expression. Angular compiles the template and replaces {{title}} with the value of $scope.title from the controller. This could also be a function call: {{getTitle()}}.

AngularJS provides two-way data binding. If you type in the input field in secondPage.html, the value of $scope.title changes along with it.
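Conceptually, two-way binding boils down to watchers that are re-checked in a digest cycle. The following is a heavily simplified sketch of that idea; the Scope type and the watch/digest names are made up for illustration, this is not Angular's real implementation:

```javascript
// A toy scope that keeps a list of watchers.
function Scope() {
	this.watchers = [];
}

// Register a watcher: a getter for the watched value and a
// listener that fires when the value changes.
Scope.prototype.watch = function(getter, listener) {
	this.watchers.push({ getter: getter, listener: listener, last: undefined });
};

// One digest pass: re-evaluate every watcher and notify on change.
Scope.prototype.digest = function() {
	var self = this;
	this.watchers.forEach(function(w) {
		var value = w.getter(self);
		if (value !== w.last) {
			w.listener(value, w.last);
			w.last = value;
		}
	});
};

var scope = new Scope();
scope.title = "The secondpage";
scope.watch(
	function(s) { return s.title; },
	function(newValue) { console.log("view updated to:", newValue); }
);

scope.digest();          // prints "view updated to: The secondpage"
scope.title = "Changed"; // e.g. the user typed in the input field
scope.digest();          // prints "view updated to: Changed"
```

Real Angular repeats the digest loop until no watcher reports a change, and directives like ng-model both write to the scope and watch it, which is what makes the binding two-way.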


So far so good: we have an AngularJS application with page navigation. Let's see what we can do with services and factories.

Create js/services.js

var angularServices = angular.module("angularServices", []);

angularServices.service("MyService", function() {
	this.helloWorldMe = function(myName) {
		return "Hello world " + myName;
	};
});

angularServices.factory("PetFactory", function() {
	return {
		createPet: function(myPetName) {
			return "Oh hello " + myPetName + "!";
		}
	};
});

The main difference between a service and a factory is explained in this thread on StackOverflow.


  • A service is a constructor function: Angular instantiates it once with new and injects that single instance (a singleton) everywhere.
  • A factory is a plain function: Angular injects whatever object the function returns.
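To make that concrete, here is a rough sketch of how an injector could treat the two recipes. The instantiateService and invokeFactory helpers are made-up names for illustration; this is not Angular's real injector:

```javascript
// .service(): the registered function is used as a constructor.
function instantiateService(ServiceFn) {
	return new ServiceFn();
}

// .factory(): whatever the registered function returns is injected.
function invokeFactory(factoryFn) {
	return factoryFn();
}

// The same recipes as in services.js, as plain functions.
function MyService() {
	this.helloWorldMe = function(myName) {
		return "Hello world " + myName;
	};
}

function petFactory() {
	return {
		createPet: function(myPetName) {
			return "Oh hello " + myPetName + "!";
		}
	};
}

var service = instantiateService(MyService);
var pets = invokeFactory(petFactory);

console.log(service.helloWorldMe("Angular")); // "Hello world Angular"
console.log(pets.createPet("Rex"));           // "Oh hello Rex!"
```

In both cases Angular caches the result, so every controller that asks for "MyService" or "PetFactory" receives the same object.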


Add the services module to your app like you did before:


var angularApp = angular.module("angularApp", [
	"ui.router",
	"angularControllers",
	"angularServices"
]);

angularApp.config(function($stateProvider, $urlRouterProvider) {
	$stateProvider
		.state("homepageState", {
			url: "/home/",
			templateUrl: "templates/homePage.html",
			controller: "HomePageController"
		})
		.state("myOtherStateName", {
			url: "/secondPage/",
			templateUrl: "templates/secondPage.html",
			controller: "SecondPageController"
		});

	$urlRouterProvider.otherwise("/home/");
});


Inject the service "MyService" and the factory "PetFactory" into our controllers and call their functions.


var angularControllers = angular.module("angularControllers", []);

angularControllers.controller("MainController", function($scope) {
	$scope.callTheMain = function() {
		console.log("What's up?!");
	};
});

angularControllers.controller("HomePageController", function($scope, MyService) {
	$scope.title = "This is the homepage";
	console.log(MyService.helloWorldMe("reader")); // example call
});

angularControllers.controller("SecondPageController", function($scope, PetFactory) {
	$scope.title = "The secondpage";
	console.log(PetFactory.createPet("Rex")); // example call
});

Now you can reuse your code by using services and factories.
I will post more articles about filters and directives.


Download the complete project with all files here

# Hello World
Published on 15-06-2015


System.out.println("Hello world");

After a few years "under construction", my website is finally finished! Just give me some time to write some blog articles.

For years I have been struggling with people asking me if I have my own website when I tell them that I'm a developer. Well, here it is.