Frankenstein – A Message for Artificial Intelligence Part 1

Can you imagine a world in which humans share their dominance with some other, non-human race? Actually, it is highly probable. Setting aside the possibility of encountering alien life anytime in the future, there is one thing that possesses the potential to outsmart humankind and change the world in ways we humans cannot. And the interesting thing is that this smart entity will be our own creation, and its deeds will, in a way, be our own deeds. I'm sure you realize what I'm referring to. Yes, it's Artificial Intelligence.

But before talking about AI and its potential implications for human civilization, let's read a story. It was written in the 19th century by an author named Mary Shelley. The novel is Frankenstein (also titled The Modern Prometheus). You can easily find the whole book online, most likely in one of your favorite book apps.

Frankenstein Book Cover

Frankenstein mainly revolves around the story of an ambitious young scientist who, in his sheer ambition to make a scientific invention, creates a living creature and ultimately gets his life changed forever. This is one of my favorite books because it illustrates the power of our ambition and the consequences of handling it poorly. It's just as Spider-Man says: "With great power comes great responsibility". With that said, let's quickly go through the story, shall we?

Frankenstein – The Story

The novel is a series of letters written by a ship captain to his cousin, describing his experiences during a long sea voyage. After a few ordinary letters, the adventurer begins mentioning a certain person named Frankenstein, whom he came across in a very distant, isolated part of the ocean. When they met, Frankenstein was in very bad health, yet in very determined pursuit of some strange object. After a few weeks of recovery and silence, the writer finally gets to have a close conversation with the newcomer. And then, my friend, the real story begins.

Frankenstein was born the first child of very kind and loving parents. When he is a kid, they adopt a little orphan girl and raise her as their own daughter. They live a very happy and blissful life until the adopted girl falls seriously ill. She eventually recovers, but while taking care of her, the mother catches the disease and dies. Devastated by his mother's death, Frankenstein, who has already completed his schooling by then, decides to go abroad for further education and leaves his family behind.

Once at college, he meets a certain professor who inspires him to study various branches of science, mainly Chemistry, Natural Science (Biology), Mathematics, and so on. Being sharp-minded and hardworking, he soon masters these subjects as far as he can be taught there. He is even praised throughout his school for improving some chemical apparatuses and techniques. After two years, when he is finally about to visit his family, a new idea strikes him: what if he could create and animate a lifeless body out of various matter?

With that almost impossible goal in mind, he dedicates all of himself and his time to this imaginary pursuit. He spends his days and nights studying and experimenting with various concepts of Chemistry and Biology. Then, one day, he finally perceives it. It all comes to him like a light and clears everything in his vision. He feels immeasurable happiness at his discovery, but doesn't, or couldn't, share it with another living being. So he decides to use this newly gained knowledge and show his creation to the world only after its completion.

He then spends a long stretch of time in this singular and monotonous pursuit. He isolates himself from the world in his private laboratory and works day and night without caring for food, rest, or entertainment. Fatigued by mental stress and physical labor, his whole physique grows weaker and paler day after day. He undergoes such turmoil and frustration at his repeated failures that he sometimes even curses himself and his teachers. But no matter what, he never gives up and keeps trying. Despite all this pain and struggle, he puts his whole heart and soul into it, building every part of his creation with the utmost care and devotion.

Frankenstein Image

And one dreary night, as Frankenstein works on his creation, filled with tiredness and stress, it finally opens its eyes. Upon noticing animation in the being he so desperately worked on for so long, instead of being happy, he is overwhelmed by horror and hatred toward his own creation. Although he worked so hard to create a perfect being, his creation has such an ugly and dreadful appearance that he himself cannot bear it. In panic and fear, he runs away, leaving behind the newly born creature. When he returns the next day, the creature is already gone.

After this incident, bad events begin to happen in Frankenstein's life. First, his little brother is murdered, and on that charge, his cousin is sentenced to death. Frankenstein, convinced that this was the hideous creature's doing, starts looking for it. When he finally finds it, the creature confesses that it killed his brother and framed his cousin for the crime. Then it also tells him its own story: what happened to it after being created by Frankenstein, and how it came to turn against him.

Since this post is getting long, I'm going to divide it into two parts. In the next one, I'll cover the rest of the story, mainly from the creature's perspective, and explain why I'm linking this story to Artificial Intelligence. Until then, I'd like to thank you for reading and request you to go through the next part as well. See you in the next post! 🙂

Notice: Migration Of This Blog to New Site

First of all, I'd like to thank you for caring about what I write here. I hope the content available through this site has been useful to you in one way or another. Because of the growing number of visitors and the size of the content published on this blog, I've decided to migrate it to a better hosting service. Accordingly, I've set up my blog on Amazon Web Services, and the updated site address is

Thank you

With that said, please expect a redirection to the new site when browsing my old one. I assure you that the content is consistent across both old and new sites, so you needn't worry about losing access to my old articles. In addition, the new site should offer better performance, higher availability, and improved security as well.

Once again, I would like to thank you for being a part of this journey. I’m truly looking forward to your participation in the future articles.

Docker: Running Apache Web Server In A Container

This is my second post in this blog series on Docker. If you haven't already read my previous post, I highly recommend reading that article first. Here, I'm going to dive a little deeper into container management, working through a more complicated application and some advanced Docker features.

So far, I've covered the introduction, basic container usage, and default networking in Docker. Now let's get into more advanced concepts in container virtualization. For this post, my goal is to build and run a container serving a website through the Apache web server. And as usual, there will be some challenges along the way, which we'll tackle during this article. With that said, let's get into action.

Docker Architecture

Building a Docker Image

I'm using CentOS's latest image, as before. If you're planning to test this on some other platform, the procedure might vary a little. As in the previous post, let's first create a Dockerfile to build an image with the required packages and configuration. My initial Dockerfile looks something like this:

sajjan@learner:~$ mkdir -p Dockerfiles/httpd
sajjan@learner:~$ vi Dockerfiles/httpd/Dockerfile
FROM centos
MAINTAINER sajjanbh <>
RUN yum -y --setopt=tsflags=nodocs install httpd
RUN yum clean all
CMD ["/usr/sbin/apachectl", "-DFOREGROUND"]

Then, I tried to build an image using this file. Note that I'm using Ubuntu as my host operating system. Here are the results I obtained:

sajjan@learner:~$ docker build -t sajjanbh/centhttpd:v1 Dockerfiles/httpd/
error: unpacking of archive failed on file /usr/sbin/suexec;589aec9b: cpio: cap_set_file
Error unpacking rpm package httpd-2.4.6-45.el7.centos.x86_64

After the build failed, I researched the error I received. It's actually a well-known issue with AUFS: it occurs when installing packages that require elevated file capabilities (e.g. httpd) into CentOS-like containers (Fedora, RHEL, Oracle, etc.) on non-CentOS-like hosts (e.g. Ubuntu). There are some kernel patches for this issue, but I couldn't find a solid method to apply them successfully. So I used a workaround: I installed a CentOS machine, installed the Docker engine on it, and built my httpd image there. After building it, I saved the image as a tar file, copied it to my main Ubuntu system, and loaded it. It worked without hitting the above issue.

# These are on my new CentOS host
[root@centos /]# yum install docker-io
[root@centos /]# docker pull centos
[root@centos /]# mkdir -p docker/httpd
# Copying Dockerfile from Ubuntu host to newly created folder; Note: <ubuntu-host> is Ubuntu's IP or hostname
[root@centos /]# scp sajjan@<ubuntu-host>:/home/sajjan/Dockerfiles/httpd/Dockerfile docker/httpd
[root@centos /]# docker build -t sajjanbh/centhttpd:v1 docker/httpd
# build image completed successfully; saving this image to tar file
[root@centos /]# docker save -o centhttpd.tar sajjanbh/centhttpd:v1
[root@centos /]# scp centhttpd.tar sajjan@<ubuntu-host>:/home/sajjan/
# This is in Ubuntu host. Loading copied tar file as docker image
sajjan@learner:~$ docker load -i centhttpd.tar

Encouraged by this small success, I continued my setup, only to face another obstacle immediately. When I tried to run my image, it didn't work as expected: although I started it as a daemon, it exited prematurely instead of serving a website. Referring to working Dockerfiles online, I learned that httpd tends to get confused by stale runtime data left behind when a container is stopped uncleanly. So we need to add remove statements to our Dockerfile to delete existing httpd data. Or better, we can create a Bash script defining the configuration and required actions, and then call it from the Dockerfile. Doing this is very useful when we need to deploy more advanced containers. Accordingly, my new Dockerfile and its corresponding script look like this:

# Startup script for the container (named run-httpd.sh here)
[root@centos /]# vi docker/httpd/run-httpd.sh
#!/bin/bash
# Remove any existing httpd runtime data
rm -rf /run/httpd/* /tmp/httpd*
# Start Apache server in foreground
exec /usr/sbin/apachectl -DFOREGROUND

[root@centos /]# vi docker/httpd/Dockerfile
# Make this image from the CentOS base image
FROM centos
MAINTAINER sajjanbh <>
# Install httpd and clean up the yum cache
RUN yum -y --setopt=tsflags=nodocs install httpd && \
    yum clean all
# Open port 80 for the container
EXPOSE 80
# Add the above script (run-httpd.sh) into the image
ADD run-httpd.sh /run-httpd.sh
# Make the script executable inside the container
RUN chmod -v +x /run-httpd.sh
# Execute the script on running the container
CMD ["/run-httpd.sh"]

[root@centos /]# docker build -t sajjanbh/centhttpd:v2 docker/httpd/

Note: I built this image on the CentOS host as mentioned above. After building it and copying it to my Ubuntu system, I loaded and ran it. Here, I'm running the container in daemon mode with the host's port 8080 mapped to the container's port 80. That means I can access the website served by this container via port 8080 of the host machine.

sajjan@learner:~$ docker run -d --name web -p 8080:80 sajjanbh/centhttpd:v2

Upon running this HTTPD container, I am able to access its page as shown in the below screenshot:

Docker Container – Test Web Page
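If you'd rather verify from the command line than a browser, a quick curl against the mapped port works too (this assumes the web container above is running with -p 8080:80):

```shell
# Request the headers of the page Apache serves through the host's port 8080
curl -I http://localhost:8080/
# An HTTP status line in the response (e.g. "HTTP/1.1 200 OK") confirms
# that the container is up and the port mapping works
```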

Let's also quickly mount our website's source code into this container so that it serves our web page instead of Apache's test page. To do that, let's stop the container; you may also remove it from the process list altogether. Then, let's re-run the httpd image so that it serves a page defined by us.
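The stop-and-remove step looks like this, using the container name web from the earlier run command:

```shell
# Stop the running web container gracefully
docker stop web
# Remove it entirely so the name and port are freed up
docker rm web
# Optionally confirm it's gone
docker ps -a
```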

# Create a folder to store my web pages
sajjan@learner:~$ mkdir test-web && cd test-web
# Put your web page here
sajjan@learner:~$ vi index.html
# Run the Apache container with source code volume mapped to it
sajjan@learner:~$ docker run -d --name myweb -p 80:80 -v /home/sajjan/test-web/:/var/www/html/ sajjanbh/centhttpd:v2
My Test Web Page

Now that I've got my web server running in its default mode, I'd like to configure it to serve my own web application. However, there's a hurdle. Traditionally, Docker containers run only one process or service, unlike physical or virtual machines, which can run any number of services. So when I ran the container above, the Apache server started in the foreground. That means if I try to attach to its TTY or console, I attach to that httpd process instead of the container's shell. Thus, I cannot manage my container the way I would a traditional server.
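As a side note, newer Docker releases offer a lighter way around this: docker exec starts an extra process, such as a shell, inside a running container without disturbing the foreground service. A minimal sketch, assuming the myweb container above is still running:

```shell
# Open an interactive shell inside the running container;
# Apache keeps serving in the foreground, untouched
docker exec -it myweb /bin/bash
# Exiting this shell does not stop the container
```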

Well, at this point it's common to think that containers aren't as beneficial and useful as they're said to be; I once thought so too. However, the problem doesn't lie with container virtualization or Docker. Rather, it lies in our perception and our reluctance to embrace change. We tend to view and use new technologies the way we always have. Until we unlearn some of what we already know and approach new technologies from a new perspective, this problem cannot be solved.

Back to the topic: Docker by default allows only one foreground service per container, and I believe there's good reason for it. But there are tools like Supervisor and Runit for running multiple services in a container. Whether to run a single service or multiple services depends entirely on our requirements and preferences. With that in mind, I'd like to have sshd running alongside the httpd service in my container, so that I can log in and perform my preferred configuration and administration tasks. Note: it's not necessary to have SSH access to containers; in fact, I'd like my containers to be as light as possible. So once my container is fully configured, I won't be running Supervisor or sshd alongside my web server.

# Defining a Dockerfile
[root@centos /]# vi docker/supervisor/Dockerfile
# Build an image from CentOS base image
FROM centos
MAINTAINER sajjanbh <>

# Install SSH, Apache packages
RUN yum -y --setopt=tsflags=nodocs install openssh-server httpd python-setuptools && yum clean all

# Install Supervisor. Note: CentOS 7 base image repo doesn't have supervisor
RUN easy_install supervisor

# Backup original sshd_config
RUN cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
RUN chmod a-w /etc/ssh/sshd_config.orig
# Make directories to store services' data
RUN mkdir /var/run/sshd /var/log/supervisor

# Set root's password
RUN echo 'root:najjas123' | chpasswd

# Permit SSH login for root user
RUN sed -i 's/#PermitRootLogin/PermitRootLogin/' /etc/ssh/sshd_config

# Generate keys for SSH setup
RUN /usr/bin/ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C '' -N ''
RUN /usr/bin/ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C '' -N ''

# Add supervisor's configuration file to the container. Note: This config file is referred by supervisord once started inside container
COPY supervisord.conf /usr/etc/supervisord.conf

# Allow ports 22 and 80 for container
EXPOSE 22 80

# Execute supervisord as entrypoint into the container
CMD ["/usr/bin/supervisord"]

# Let's also create a supervisord.conf file
[root@centos /]# vi docker/supervisor/supervisord.conf
[supervisord]
; Run supervisord in the foreground so the container stays alive
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:httpd]
command=/bin/bash -c "rm -rf /run/httpd/* /tmp/httpd* && exec /usr/sbin/apachectl -DFOREGROUND"

Then, let’s build an image out of it. Note that I’m building and saving this image in a CentOS host machine.

# Building an image using above Dockerfile and config file
[root@centos /]# docker build -t sajjanbh/centsupervisor:v1 docker/supervisor/
# Export the docker image as a tar ball
[root@centos /]# docker save -o centsupervisor.tar sajjanbh/centsupervisor:v1

Next, let’s fetch that tar ball file to our Ubuntu docker host and start using it as follows:

# Download this tar ball to Ubuntu host
sajjan@learner:~$ scp root@docker2:/root/centsupervisor.tar .
# Import the docker image from tar ball
sajjan@learner:~$ docker load -i centsupervisor.tar

# Verify the image in docker local image repository
sajjan@learner:~$ docker images | grep sajjanbh/centsupervisor
sajjanbh/centsupervisor   v1                  ccf548acd958        39 minutes ago      250 MB

# Run this supervised container
sajjan@learner:~$ docker run -d --name mysupweb -p 2222:22 -p 80:80 -v /home/sajjan/test-web/:/var/www/html/ sajjanbh/centsupervisor:v1

# Verify SSH to container
sajjan@learner:~$ ssh root@localhost -p 2222
The authenticity of host '[localhost]:2222 ([]:2222)' can't be established.
RSA key fingerprint is SHA256:z9vUQ0SODzZlVFDKmwH9TTCbouAkoSbzy8FQ8iXBtRY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:2222' (RSA) to the list of known hosts.
root@localhost's password:

# Let's also check Apache's access log to verify it's working (log file name assumed to be access_log)
[root@2aff9cd532ae ~]# tailf /var/log/httpd/access_log
- - [12/Feb/2017:09:36:57 +0000] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:51.0) Gecko/20100101 Firefox/51.0"
- - [12/Feb/2017:09:36:57 +0000] "GET /style.css HTTP/1.1" 304 - "http://localhost/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:51.0) Gecko/20100101 Firefox/51.0"

Upon successfully running this container, I saw the same web page as before. The main difference from the previous container is that this one runs its services (SSH and Apache) in supervised mode. Here's a video tutorial implementing the whole setup procedure:

Well, this is it for now. Today, I've covered running services like Apache and SSH inside a container in both standalone and supervised modes. I hope this has been informative and useful for you. Please let me know your opinion in the Comments section below. And as always, thanks for reading!

Docker: How To Get Started With Containers in Ubuntu

Are you a software developer, a DevOps engineer, an IT student, or simply a tech enthusiast? If yes, my guess is that you already know what containers are and how the Docker project is making them better. Unlike virtual machines, each of which runs its own kernel on top of the hypervisor, containers share the host's single kernel to run multiple operating system instances. The upsides of containers are definitely lower memory and CPU consumption, which means more applications on fewer resources. In addition, they provide greater agility in developing, testing, and deploying software. This ultimately reduces the total CAPEX and OPEX of running a cloud or data center.

Docker Logo

What is Docker?

According to Docker Inc., Docker is a container-based virtualization technology that is lightweight, open, and secure by default. A Docker engine runs on top of the host operating system and allows software binaries and libraries to run on top of it. These containers wrap a software package in a complete filesystem containing everything necessary to execute it: code, runtime, system tools, system libraries, and so on. This ensures as-is, easy transportation and deployment of the application.

Docker Installation

For this post, I'm going to use Ubuntu 16.10 as my host operating system. You can find corresponding installation guides for other platforms in the official documentation. Here are the commands I entered:

# Adding the docker repo with codename xenial (16.04). It also works for Yakkety Yak (16.10)
sajjan@learner:~$ sudo vi /etc/apt/sources.list.d/docker.list
deb https://apt.dockerproject.org/repo ubuntu-xenial main
# Remove any existing docker packages
sajjan@learner:~$ sudo apt-get purge lxc-docker
# Refresh the package index and check which docker-engine version APT will install
sajjan@learner:~$ sudo apt-get update
sajjan@learner:~$ sudo apt-cache policy docker-engine
# Install docker-engine
sajjan@learner:~$ sudo apt-get install docker-engine

These installation steps ensure we're setting up the latest version from the Docker repository. If you'd rather follow an easier process, simply run "sudo apt-get install docker.io", which installs the Docker version currently available in Ubuntu's own repository (packaged there as docker.io). After installation, let's start the Docker service and begin using it.

# Start docker engine
sajjan@learner:~$ sudo systemctl start docker
# Get docker's installed version
sajjan@learner:~$ sudo docker version

Next, let's also configure our system so that we don't need to type "sudo" every time we use Docker. This is an optional but really useful step, because we'll be using the "docker" command quite often and typing "sudo" each time won't be pleasant. It's done by adding our user to the docker group as follows:

# Add a group called docker. It generally gets added during docker installation
sajjan@learner:~$ sudo groupadd docker
# Modify current user to be member of docker group
sajjan@learner:~$ sudo usermod -aG docker $USER

To apply this change, we must log out and log back in to the system. After re-login, we can run docker without prefixing it with "sudo".
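After logging back in, a quick check like this should confirm the group change took effect (no sudo anywhere):

```shell
# The current user should now appear in the docker group
groups | grep -w docker
# And the docker client should reach the daemon without sudo
docker version
```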

Getting Started With Docker Containers

Then, let’s perform some basic container actions:

# Running simple hello-world container
sajjan@learner:~$ docker run hello-world
# Search for available images for CentOS container
sajjan@learner:~$ docker search centos
# Pull CentOS image. Since no repo or tag is provided, it'll pull latest image from default repo
sajjan@learner:~$ docker pull centos
# List locally available container images
sajjan@learner:~$ docker images

Now that we have an image for CentOS, let's go ahead and run it. To get interactive terminal access to the container, let's run it as follows:

sajjan@learner:~$ docker run --name mycent -it centos
[root@96ee7cce09b7 /]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
# Enter Ctrl+P+Q to switch to host's shell

Since this is just a core version of CentOS, we won't be able to run our usual system commands like ifconfig, ip, ssh, and so on. To get them working, we have to install their packages inside this container and then commit it to generate a new image. Alternatively, we can build a custom container image using a Dockerfile.

Now, let's install the packages using Yum. If you're wondering how networking inside the container works: the Docker engine by default creates a bridge interface (docker0) and assigns container interfaces addresses from a private range, by default 172.17.0.0/16. We can verify this by viewing the interface details on the host system and pinging in each direction. I'll explore container networking in detail in future posts; for now, let's just note that Docker provides bridged networking by default and that containers can reach external networks through this bridge.

sajjan@learner:~$ ifconfig | more
sajjan@learner:~$ docker network inspect bridge
# Ping the container (substitute its IP from the inspect output)
sajjan@learner:~$ ping <container-ip>
# Attach to the docker container named "mycent"
sajjan@learner:~$ docker attach mycent
# Ping the host (substitute the docker0 bridge IP)
[root@96ee7cce09b7 /]# ping <host-ip>
[root@96ee7cce09b7 /]# yum install -y iproute openssh-server

Now that I've modified the initial image, I'd like to build a custom image out of it so I can reuse it later. We can do this by committing the stopped container:

sajjan@learner:~$ docker stop mycent
# Commit the container to build custom image
sajjan@learner:~$ docker commit -m "sshd + ip" -a "SSH N IP" `docker ps -l -q` sajjanbh/cen
sajjan@learner:~$ docker images

Working with Docker Files

Lastly for this post, let's explore the basics of the Dockerfile. A Dockerfile is similar to the installation scripts we use to provision and deploy operating systems or servers: a batch of statements to be performed on the container image. Here's what my example Dockerfile looks like:

sajjan@learner:~$ mkdir -p dockerfiles/centos
sajjan@learner:~$ vi dockerfiles/centos/Dockerfile
FROM centos
MAINTAINER sajjan <>
# Install ssh server and ip commands
RUN yum install -y openssh-server iproute
# Backup original sshd config file
RUN cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
# Deduct write permission to original sshd config file
RUN chmod a-w /etc/ssh/sshd_config.orig
RUN mkdir /var/run/sshd
# Set login password for user root in container
RUN echo 'root:najjas123' | chpasswd
# Allow root login through SSH
RUN sed -i 's/#PermitRootLogin/PermitRootLogin/' /etc/ssh/sshd_config
# Generate keys for SSH. Without these, SSH daemon won't start
RUN /usr/bin/ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C '' -N ''
RUN /usr/bin/ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C '' -N ''
# Start SSHD daemon in container, so that we can login into it with SSH
CMD ["/usr/sbin/sshd", "-D"]
# Expose the SSH port from the container
EXPOSE 22

# Build a custom image of CentOS using the Dockerfile
sajjan@learner:~$ docker build -t sajjanbh/centsshd:v1 dockerfiles/centos/
# List docker images, it'll contain our newly built image
sajjan@learner:~$ docker images
# Start the new container
sajjan@learner:~$ docker run -d --name centsshd -P sajjanbh/centsshd:v1
# List docker processes and get the mapped port for container eg. 32771
sajjan@learner:~$ docker ps
# SSH into container
sajjan@learner:~$ ssh root@localhost -p 32771
[root@96ee7cce09b7 /]# 
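Instead of scanning the docker ps output for the mapped port, the docker port subcommand prints it directly; a small sketch using the container above:

```shell
# Show which host port is mapped to the container's SSH port 22
docker port centsshd 22
# Output is the host address and port, for example 0.0.0.0:32771
```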

Here’s the whole tutorial video in action:

Well, this is it! I believe this article has fulfilled its purpose as an introduction and getting-started guide for Docker. I hope it has been informative to you. Please don't hesitate to like, comment, share, and subscribe to this blog. And as always, thanks for reading!

CubLinux – Yet Another Linux Distribution

Welcome to yet another of my posts! If you're a fan of Linux, this article is definitely for you. Here, I'm going to write a little review of a new Linux distro that I've been using lately. I don't know about you, but I hadn't heard of a distro called CubLinux before. As the name itself suggests (Chrome + Ubuntu + Linux), CubLinux is a combination of Chrome OS and Ubuntu, which means it brings the features and benefits of both. Ain't that cool!

CubLinux Logo

CubLinux Installation

You can read the whole wiki on CubLinux to learn its history and other details, so I won't bore you by repeating them here. Instead, let's explore CubLinux based on my user experience over a couple of months. Firstly, if you're currently using Windows and thinking of trying a Linux OS, it's probably one of the best options. Almost all Linux distros support dual boot alongside Windows or any other OS, but I'm recommending CubLinux because it makes the whole process very easy and simple, even for a complete newbie.

Well, CubLinux, like most other Linux distros, comes with a Live boot option, which lets you run the OS without installing it on your PC. You can achieve this simply by making a bootable USB from the CubLinux ISO using tools like Rufus, Yumi, and so on. If you want a dual-boot setup alongside Windows, you may have to create a new partition by shrinking one of your drives in Windows via the Disk Management tool. Then, you can either install CubLinux straight from boot, or try it first and install it later, whichever you prefer.

Most of the installation procedure is as straightforward as installing any other software. The only section that confuses most beginners is partitioning; if you're a beginner wanting a dual-boot setup, there's a good chance you'll mess up this step with most Linux distros. CubLinux simplifies it as well: it automatically detects the Windows boot record (the MBR, perhaps) and asks if you want a dual-boot setup. This is similar to other Ubuntu derivatives, but I found it even easier in CubLinux. Then you can simply choose the newly created partition and select automatic partitioning. Note: if you know what you're doing, you may also attempt custom partitioning. Once the installation completes, the next time you reboot your PC you'll see a boot screen (GRUB) prompting you to choose which operating system to run.

CubLinux Exploration

Continue reading “CubLinux – Yet Another Linux Distribution”

PHP7 – What’s New and Better in PHP?

In some of my previous posts, I discussed using PHP as part of LAMP for web application development. If you haven't noticed already, when I said PHP, I was talking about PHP version 5.x. Now, however, I'm going to talk specifically about PHP7 and the new features it brings to the table.

PHP7 Logo

PHP History

PHP5, the most popular and most widely used PHP version, was introduced in 2004, and immediately after its launch, development of PHP6 began. The main goal of the proposed PHP6 was to add native support for Unicode characters so that it could handle multiple languages. However, despite years of development effort, native Unicode support couldn't be achieved, and the release of PHP6 was postponed. Other improvements from that effort were released in the minor PHP5 releases 5.3 and 5.4.

Eleven years after the launch of PHP5, the community finally decided to publish its next major release. Since the main goal of PHP6 (native Unicode support) was never completed and the project was abandoned in 2010, most members voted to skip the name PHP6 to avoid confusion and to launch PHP7 directly (December 3, 2015). As a matter of fact, PHP7 doesn't provide native Unicode support either, but that was never this version's priority, so it launched as the first major PHP release in 11 years.

PHP7 Advantages

Continue reading “PHP7 – What’s New and Better in PHP?”

GIT – A Simple Getting Started Guide For Git

There is a tool called "Git" in the IT field that may not be familiar to most people, despite being one of the most useful and handy tools in the industry. So, in this post, I'm going to cover an introduction and getting-started guide for Git so that more people can understand it and leverage its benefits.

Git Logo

What is Git?

Basically, Git is a distributed version control system. So, what is a version control system and why should we care? Well, have you ever been in a project where many people collaborate from various sections and everyone keeps changing and growing their work? In this scenario, each member may be well aware of their own work, but what about the work done by other team members? In large or long projects, even keeping track of one's own work is far from trivial. Although it feels like we can remember what we're currently doing, in reality our memory fades with time. There may be specific circumstances that led us to do something a particular way, but we'll likely have forgotten our reasoning by the time we look at the work again later. This is the problem domain that Git addresses and solves for us.

Especially in the software and application development sector, Git is a must-have tool. Developing, managing, and maintaining software without the help of a version control system like Git is truly hectic. Git records changes to the code over time and maintains separate versions for each major change. This way, you don't need to remember what changes you made during the various stages of development, or why you made them.
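To make this concrete, here's a minimal Git session recording two versions of a file; the directory name, identity, and messages are just placeholders:

```shell
# Create a fresh repository
mkdir demo-project && cd demo-project
git init
# Identify yourself to git (required before committing)
git config user.email "you@example.com"
git config user.name "Your Name"
# Record a first version of a file
echo "hello" > greeting.txt
git add greeting.txt
git commit -m "Add greeting"
# Change it and record a second version
echo "hello, world" > greeting.txt
git commit -am "Expand greeting"
# Show every recorded version with its message
git log --oneline
```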

Installing Git

Continue reading “GIT – A Simple Getting Started Guide For Git”

LAMP-Securing Your Web Server With SSL

Hello and welcome! In my previous post on the LAMP web server, I discussed its installation and configuration in a CentOS environment. Today, I'm writing about how to set up an SSL certificate for our website, along with some other security-related configurations.

Installing SSL Certificate

First of all, let’s install the mod_ssl package, which provides SSL/TLS support for Apache. In CentOS, this is done as follows:

[root@web ~]# yum -y install mod_ssl

Then, let’s create a directory where we’ll keep our SSL certificates.

[root@web ~]# mkdir /etc/httpd/ssl

Now, let’s generate a self-signed certificate for our web server.

[root@web ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/httpd/ssl/my-certificate.key -out /etc/httpd/ssl/my-certificate.crt

Here, openssl is the utility that generates the SSL certificate. x509 is an important standard for Public Key Infrastructure (PKI). The -nodes option skips protecting the private key with a passphrase, and -days 365 is the number of days the certificate is valid for. rsa:2048 specifies the length of the encryption key used by this certificate; you can choose 1024, 2048 or 4096 depending on your preference for security versus performance. Lastly, my-certificate.key is the file containing the private key and my-certificate.crt is the file containing the self-signed certificate. You can name these files as you like.
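To see what those flags actually produce, here is a self-contained sketch that generates a throwaway certificate under /tmp (the -subj option pre-fills the interactive prompts; the CN value is just a placeholder) and then inspects its subject and validity window:

```shell
# Generate a throwaway self-signed certificate with the same flags as above
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=www.example.com" \
    -keyout /tmp/demo.key -out /tmp/demo.crt

# Print the subject and the notBefore/notAfter validity dates
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```

The same `openssl x509 -noout -subject -dates` inspection works on any certificate file, so it's a handy sanity check before wiring a certificate into Apache.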

Now that our self-signed SSL certificate is ready for use, let’s implement it. To do so, let’s modify the corresponding conf file so that it contains the correct DocumentRoot and ServerName statements within <VirtualHost _default_:443>. Also make sure that the key and certificate files generated above are referenced in this conf file.

vi /etc/httpd/conf.d/ssl.conf
<VirtualHost _default_:443>
ServerName www.example.com:443
DocumentRoot "/var/www/html/test"
SSLCertificateFile /etc/httpd/ssl/my-certificate.crt
SSLCertificateKeyFile /etc/httpd/ssl/my-certificate.key
</VirtualHost>

Now, our Apache daemon needs to be restarted in order to serve HTTPS. But before doing that, I need to allow the HTTPS service (port 443) through my firewall. Since I'm using firewalld on CentOS 7, I do the following; you may need to configure your own firewall accordingly.

[root@web ~]# firewall-cmd --permanent --zone=public --add-service=https
[root@web ~]# firewall-cmd --reload
[root@web ~]# systemctl restart httpd

After the restart completes, we can verify the use of SSL encryption by browsing our website over https. Since this is a self-signed certificate, most web browsers will display a warning message; however, we can skip this warning and keep using HTTPS. This option is useful if our web applications are used only internally or inside a private network. But when we need to host our websites publicly, we should use commercial or third-party-signed certificates. Nowadays, there is also the option of using an open certificate authority like Let's Encrypt, which is free and open to everyone. It comes with an automation tool called Certbot that allows us to easily obtain and deploy SSL certificates on our web servers.


After deploying the SSL certificate on the server, we would like all our web traffic to be encrypted and secured. However, there may be users who aren't security-aware and continue using plain HTTP connections. In that case, we can enforce SSL encryption for all users by redirecting any HTTP traffic to HTTPS. This can be done by adding the following lines to the httpd.conf file.

[root@web ~]# vi /etc/httpd/conf/httpd.conf
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Turn-Off Displaying Server Signature

Since attackers can tailor their attacks based on the server version and signature, it's best practice to turn off displaying the server's signature to users. In Apache, this can be done by appending the following lines to the httpd.conf file.

[root@web ~]# vi /etc/httpd/conf/httpd.conf
ServerSignature Off
ServerTokens Prod

Suppress PHP Errors

Although it is essential to display the error/warning messages generated by a web application during the development phase, it can be very dangerous to display them in a production environment. These outputs can leak sensitive information about the application and database, which can ultimately help attackers compromise your application and, in the worst case, your business as well. Therefore, it is crucial to suppress these error and warning messages from being displayed to users in production. This can be done at the application level by writing code to handle errors, and it can be suppressed globally by setting the display_errors directive to Off inside the php.ini file (errors can still be written to a log file for the administrator's use).
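For reference, here is a minimal sketch of the relevant php.ini directives (the log path is illustrative; adjust it to your environment):

```ini
; Hide errors from site visitors in production
display_errors = Off
display_startup_errors = Off

; Keep recording errors for administrators instead
log_errors = On
error_log = /var/log/php_errors.log
```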

There is a huge array of security threats to web applications and servers, not all of which can be covered in one post. However, I've attempted to shed some light on a few of them here and hope to cover more in future posts. I hope this has been useful. Please let me know your suggestions or queries in the Comments section below. Thank you!

Blocking Mails Based on Subject in Zimbra

Welcome back! In this post, let's talk about tightening our anti-spam (SpamAssassin) configuration in Zimbra. In the real world, there may be serious reasons to filter and block mails whose subjects contain certain patterns or words. Here are the steps to achieve this in Zimbra 8.5 and later.

Zimbra Content Filter
Zimbra Content Filter

1) As the root user, create a policy file inside /opt/zimbra/data/spamassassin/rules/, e.g.

vi /opt/zimbra/data/spamassassin/rules/
header  SUB_ATTACHMENT  Subject =~ /(\.jpg|\.png|\.gif|\.pdf|\.doc|\.docx|\.xl|\.ppt)/i
describe SUB_ATTACHMENT Subject contains Attachment Name.
score   SUB_ATTACHMENT  20.0

Here, SUB_ATTACHMENT is a rule that matches mail based on a regular expression test against the Subject header (note the escaped dots, so that `\.pdf` matches a literal “.pdf”). The describe statement is a short human-readable description of the rule that appears in SpamAssassin's reports. Finally, the score statement sets the spam score for mail matching this rule. Here, the spam score of 20.0 is much higher than the maximum allowed spam score, so mail matching this rule will be categorized as spam and discarded by Zimbra.
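Before deploying the rule, we can convince ourselves that its regular expression behaves as intended by testing it locally with grep -P, which uses Perl-compatible regexes, the same flavor SpamAssassin uses (the sample subjects below are made up):

```shell
# The rule's pattern, with the dots escaped so they match literal dots
PATTERN='(\.jpg|\.png|\.gif|\.pdf|\.doc|\.docx|\.xl|\.ppt)'

# A subject containing an attachment-style name matches (case-insensitively)
echo "Scanned invoice.PDF attached" | grep -qiP "$PATTERN" && echo "matched"

# An ordinary subject does not match
echo "Meeting agenda for Monday" | grep -qiP "$PATTERN" || echo "no match"
```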

2) Change ownership of this file to zimbra user

chown zimbra:zimbra /opt/zimbra/data/spamassassin/rules/

3) Restart Amavis to implement changes

su - zimbra -c "zmamavisdctl restart"

Now, if any mail arrives with a subject containing .jpg, .png, .gif, .pdf, .doc, .docx, .xl or .ppt, it will be scored 20.0 and then discarded. Note that this will send a bounce notification back to the sender.

To further strengthen your anti-spam setup in Zimbra, you may also add the custom rule sets maintained by security researcher Kevin McGrail. To implement them, follow these steps:

cd /opt/zimbra/data/spamassassin/localrules/
wget -N
su - zimbra -c "zmamavisdctl restart"

In this way, we can implement subject-based mail filtering on a Zimbra server. I hope this post has been informative and useful. Please let me know your suggestions or queries in the Comments section below. Thank you!

Enable Authentication In Zimbra CBPolicyd Webui

Hello there! In this post, I'm going to write about how to enable authentication in the Zimbra CBPolicyd webui. If you use Zimbra but haven't used CBPolicyd yet, I recommend you see my previous post, where I discussed the installation and use of CBPolicyd for managing mail-related rules and restrictions in Zimbra. Until now, we haven't had any kind of authentication mechanism protecting CBPolicyd's web panel. So, let's go ahead and set up an authentication method for our crucial CBPolicyd webui.

CBPolicyd Authentication Prompt
CBPolicyd Authentication Prompt

First, let’s create a .htaccess file in CBPolicyd’s home directory.

vi /opt/zimbra/cbpolicyd/share/webui/.htaccess

Inside this file, add the following lines:

AuthUserFile /opt/zimbra/cbpolicyd/share/webui/.htpasswd
AuthGroupFile /dev/null
AuthName "User and Password"
AuthType Basic

require valid-user

As we can see in the first line of the .htaccess file, we now need to create a .htpasswd file, which will be used as the authentication user file. This .htpasswd file will contain the username and password for authenticating to the CBPolicyd webui. So, let's create this file as follows:

touch /opt/zimbra/cbpolicyd/share/webui/.htpasswd
/opt/zimbra/httpd/bin/htpasswd -cb /opt/zimbra/cbpolicyd/share/webui/.htpasswd <user> <password> 

Then, we need to add a directory entry for CBPolicyd by appending the following lines to httpd.conf.

vi /opt/zimbra/conf/httpd.conf

Alias /webui /opt/zimbra/cbpolicyd/share/webui/
<Directory /opt/zimbra/cbpolicyd/share/webui/>
# Below are the access rules for this directory. If you don't want any restrictions on this site, either delete or comment out the following lines.
AllowOverride AuthConfig
Order Deny,Allow
Allow from all
</Directory>

Finally, let’s restart the Apache server to apply the changes:

su - zimbra -c "zmapachectl restart"

I hope this has been helpful. Please let me know if you have any feedback or suggestions in the Comments section below.