Yesterday I went to Evoswitch to find out why my other server was no longer reachable. I had tried to send an ACPI reboot signal remotely, but the server didn't respond. I had one of those moments where I could have smashed my head into my desk.
I have the following servers:
Dell R210 from 2010
Dell R310 from 2013
When I installed the R310, I migrated everything from the R210. I also moved the IP address over because of my mail server's reputation: it's hard to build one up, so it would be a waste not to keep using my old IP. So the R310 is configured as Server001 and the R210 runs as Server002. I thought I was done, but I forgot one thing... THE POWER CABLES!
When I sent a remote reboot/shutdown signal to Server002 (the R210), it was the R310 that shut down! FUCK!
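For remote power control, the usual tool is ipmitool talking to the server's BMC/iDRAC. A minimal sketch, assuming IPMI-over-LAN is enabled on the DRAC; the hostname and credentials are placeholders, not my actual setup:

```shell
# Check which box you're actually talking to before pulling the trigger.
# Host and credentials below are placeholders.
ipmitool -I lanplus -H drac-server002.example.com -U root -P 'secret' chassis power status

# Hard power-cycle the machine when the OS no longer responds to ACPI signals.
ipmitool -I lanplus -H drac-server002.example.com -U root -P 'secret' chassis power cycle
```

Labeling the power cables (and checking `chassis power status` first) would have saved me this trip.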
Replacing the R210
I replaced my R210 with an R410, which I acquired from work. It has the following specs:
2x Intel Xeon E5640
12 GB DDR3 RAM
Dell PERC H700
2x Western Digital RE 500 GB
I installed Fedora 25 with Docker to try out some cool things. It's more fun to run Docker on your own colocated server than in a VirtualBox VM on your workstation :D.
The nice thing about Fedora is that it ships the newest packages, so I don't have to mess around with third-party repos like I had to on CentOS. Running a bleeding-edge release has its benefits, but it can also turn out very badly. Then again, you can say the same about the dependency hell you get from third-party repositories.
The shared rack
As you can see, it's a fucking mess. No, not my hair; I'm talking about the servers hanging on two screws without rack rails. A server like that could literally "crash" right out of the rack.
You see things like Sitecom consumer switches connected to PowerEdge/HP servers.
It's running well now. Time to try the cool stuff. Running on 16 threads, baby!
It all started on my hobby barebone server. My goal was to learn to set up a webserver and to create some hobby websites on it. As a student, I bought an Asus Terminator T2-P deluxe.
The Asus Terminator T2-P deluxe had the following specs:
After my home server got hacked due to crappy security (who the hell allows remote access to Webmin?), it became part of a botnet. This lasted a few days until my ISP disconnected me and sent me a letter stating that there had been illegal activity from my IP address, which is why they kicked me off the internet.
It was time to try the real deal: FreeBSD. It was my first introduction to UNIX-like systems. Looking back, it was the hardest system I could have started with, but it was the best way to learn about UNIX. It took me around four reinstalls to learn what I should do better the next time. At one point I was running a Postfix mailserver as an open relay; around 40k mails were being sent through it each day.
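An open relay like that is easy to prevent. A minimal sketch of the relevant Postfix settings in main.cf (modern Postfix, 2.10 or later; the network value is just an example):

```
# /etc/postfix/main.cf
# Only trust the local machine; don't relay for the whole internet.
mynetworks = 127.0.0.0/8

# Relay only for trusted networks and authenticated users;
# everyone else may only deliver to domains this server hosts.
smtpd_relay_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    defer_unauth_destination
```

With restrictions like these in place, random strangers can no longer pump 40k mails a day through your box.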
My next step into UNIX systems was CentOS. As on FreeBSD, I ran Apache, PHP, Postfix, MySQL, and Bacula on the same operating system. That setup wasn't reliable: as soon as one component crashed, everything went down with it.
It became time to go enterprise. In 2010 I bought a Dell PowerEdge R210 with the following specs:
Colocated with Leaseweb in the Evoswitch datacenter in Haarlem
After a year of running CentOS colocated, I became interested in virtualisation. I'd read about it, and everyone talked about it like it was the answer to the meaning of life. Hell yeah, it was.
The first hypervisor I tried was ESXi. I bought a second-hand Dell PowerEdge 1850 with 2x 72 GB SCSI disks to experiment on. The server nearly made me deaf running in my room, and there were driver problems too. Because the PE 1850 was kinda old, I was forced to use an older version of ESXi, and I wasn't happy with the lack of a web interface. I sold the PE 1850.
The next thing I tried was Proxmox. This was what I had been looking for: an open-source, Debian-based bare-metal hypervisor. I couldn't install it on my colocated PowerEdge R210 because my mail and websites were running on it, so I decided to buy another Dell server.
The new Dell PowerEdge R310 server specs:
Well, the server was up and running Proxmox with four virtual machines, and I faced the next problem: the datacenter gave me only one public IP. Still, I managed to make all virtual machines reachable behind that single public IP.
The solution was a virtual internal network between the virtual machines: Proxmox is configured to bridge incoming connections to a virtual firewall/router, which translates them for the machines behind it. This technique is called NAT (Network Address Translation) with masquerading.
All other virtual machines sit behind that firewall/router. If the firewall/router VM is shut down, none of the underlying virtual machines are reachable. Sounds secure to me. Every virtual machine I run has its own responsibility; as I said, running multiple 'servers' on one operating system is not reliable. So one VM runs only Apache, another runs MySQL, and so on.
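The NAT part boils down to a few iptables rules on the firewall/router. A minimal sketch; the bridge names, the 10.0.0.0/24 network, and the 10.0.0.10 web server address are illustrative assumptions, not my actual config:

```shell
# Let the kernel route packets between the interfaces.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade outgoing traffic from the internal VM network
# so it all leaves through the single public IP on vmbr0.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE

# Forward incoming HTTP on the public interface to the web server VM.
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.10:80
```

One DNAT rule per exposed service keeps the mapping explicit: anything without a rule simply isn't reachable from outside.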
Well, I like learning to manage my own server, where I can do whatever I want. The only real advantage of a VPS is that you don't need to worry about the hardware: if something breaks, it isn't your responsibility anymore.
A VPS also usually offers a limited choice of operating systems. And as I said before, it's not reliable to give one server/operating system many responsibilities. If you want to run a webserver, a database, and a mailserver on three separate VPSes, you'll pay at least around €100.
After a few years "under construction", my website is finished at last! Just give me some time to write some blog articles.
For years I've struggled with people asking whether I have my own website when I tell them I'm a developer. Well, here it is.