HTTP 301

Interviews are sometimes very interesting as they can point out gaps in what you know. In an interview last week I was asked how I would redirect a browser to a new URL.

I immediately suggested using a page with a redirect embedded using JavaScript – I’ve done this a few times :). The interviewer then hinted at status codes…

I know all about HTTP status codes (2xx = success, 3xx = redirect, 4xx = client error, 5xx = server-side error), and I have used a number of the 200, 400 and 500 codes. So I knew that the 3xx codes were for redirects, but I had never actually used them to perform one.

I spent some time over the weekend working out how to do a redirect using 301 – Moved Permanently – just because I had never done it before.

So here is how to do it in PHP:

http_response_code(301);
header("HTTP/1.1 301 Moved Permanently"); // explicit status line for the redirect
header("Location: " . $newurl);           // $newurl holds the destination URL
exit; // stop running the script once the redirect headers have been sent
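To check the redirect, you can request the page with curl and look at the response headers (a quick sanity check; the URL below is just a placeholder for wherever the redirect script is hosted):

curl -I https://example.com/old-page.php

The first line of the response should be HTTP/1.1 301 Moved Permanently, followed by a Location header pointing at the new URL.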

Really Simple Svelte Routing

Routing is a key feature of any web application: it is used to display content based on the selections the user makes, for example when choosing a menu option. There are many routing components available, but sometimes only a very simple routing option is needed, and a full-featured routing component may be overkill.

This tutorial will show you a very simple way to add routing to a Svelte single page app. The functionality for the routing is all in one file. In this tutorial we will place it in the main page, but it could easily be extracted out of the main page into its own component.

Create Project

To start with, let's create a simple Svelte project using one of the base templates. We will be changing all the content on the page, but this is the easiest way to get a Svelte project started and running.

Create Svelte project from a simple Svelte template

npx degit sveltejs/template svelte-spa-router

Install dependencies

npm i

Start the application

npm run dev

Page Layout

Our page layout is going to be a simple two-column layout with the menu in the left-hand column; the right-hand column will be used to display the content for the chosen menu option. A CSS framework could be used for the columns, but for this tutorial we will stay with custom CSS classes instead of creating a dependency on a third-party library.

In the App.svelte page add the following styles. Svelte allows styles to be applied per component.

<style>
  .row {
    display: flex;
    flex-direction: row;
    flex-wrap: wrap;
    width: 100%;
  }

  .column {
    display: flex;
    flex-direction: column;
    flex-basis: 100%;
    flex: 1;
  }
</style>

Now that we have the styles for a two-column layout, let's create the HTML for the page. Initially the two columns will just display simple headers, but we will replace these later as we build our routes.

Replace all the existing HTML with:

<main>
    <div class="row">
        <div class="column">
            <h1>Menu</h1>
        </div>
        <div class="column">
            <h1>Content</h1>
        </div>
    </div>
</main>

If you are running the development server you should now see the two-column layout with the headers.

Create a Menu

Let's add a menu in the left-hand column using anchor links. Replace the text in the first column with our menu:

<h1>Menu</h1>
<a href="#home">Home</a>
<a href="#red">Red Page</a>
<a href="#green">Green Page</a>

This menu uses location hashes to define the page to be displayed. When the user clicks a menu option, the page URL will update to include the hash value. We will then read the hash value from the URL and, based on the selected menu option, display the relevant page.

Get the menu selection

To get the menu selection we need to read the hash from the current page URL.

<script>
  let page = document.location.hash;
</script>

This will extract the page hash from the URL, but we also need to pick up the hash whenever it changes, such as when the user selects a menu option. Add the following inside the script block:

window.onpopstate = function(event) {
  page = document.location.hash;
};

Now the current location hash is in our page variable both when the user accesses our page with an existing hash value (such as from a bookmark) and when the user clicks one of the menu options.
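Putting the two pieces together, the script block in App.svelte ends up looking something like this (just a consolidated sketch of the two snippets above):

<script>
  // current location hash, e.g. "#red", read when the page first loads
  let page = document.location.hash;

  // update the page variable whenever the hash changes (menu clicks, back/forward)
  window.onpopstate = function(event) {
    page = document.location.hash;
  };
</script>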

Add the Routing

Based on the page variable we now want to change the content in the right-hand column to match the page that was selected. Replace the Content column contents with this markup:

{#if page === "#home"}
    Home Page
{:else if page === "#red"}
    Red page
{:else if page === "#green"}
    Green page
{:else}
    404: Page not Found
{/if}

In the else section we can show whatever page we want. In the example above we show an error message; we could just as easily have shown the home page, perhaps passing a property through to indicate to the user that the requested page does not exist.

Now when you click on the menu options the content on the right will change to display the selected content. At the moment the content is just a simple text string but could be replaced by another component.

Creating the Red Page

To show that routing can be triggered from anywhere we can add a hash link on any page and the routing will still pick it up.

Create a new component called red.svelte and add the following to it:

<div>
    <h1>This is the Red page</h1>
    <a href="#green">Change to Green Page</a>
</div>

<style>
  div {
    background: red;
    color: white;
  }
</style>

Now replace the “Red page” text in the main page with <Red /> (remember to import the Red component into App.svelte).
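For reference, the two changes in App.svelte would look something like this (a sketch; the import path assumes red.svelte sits in the same folder as App.svelte):

<script>
  import Red from "./red.svelte"; // make the Red component available
  // ...the existing page/onpopstate code stays as-is...
</script>

{:else if page === "#red"}
    <Red />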

Now choosing Red from the menu will display the red page content, and selecting the link on the red page to change to the green page will display the green page, just as though the green menu option had been selected.

Source code is available on GitHub: ReallySimpleSvelteRouting

Mainframe to Cloud – a short history

To understand cloud technologies we need to understand the older technologies they evolved from. The cloud is often considered an evolution of existing technologies rather than a brand new technology, and ideally we would build software with a cloud-first view rather than by comparing it to what came before. But, because the cloud evolved from older architectures, the comparison will always remain.

So a short history lesson:

In the beginning was the mainframe, a single large computing hub, with terminals (screen and keyboard) that were distributed where people sat and did their work. While it looked like people were doing work at a computer at their desk the computer was actually in the computer room and they were working on what was termed a dumb terminal as it had no processing capability of its own.

Then came personal computers which moved the processing to the desktop. Personal computers were often linked to a bigger computer in the backend but had processing capabilities of their own, CPU, memory, and storage. The personal computer originally had all the programs installed as applications on the device. These programs would then make network calls to a database sitting somewhere else that would answer queries that the application would display to the user. This was called client-server (client = desktop, server = database)

Client-server then evolved into a three-tier technology. Some of the processing that the desktop was doing was moved to a server, so the client application started displaying information to the user instead of processing the data, processing (and database access) was handled by the server layer. If this sounds similar to the web, in a way it was, but instead of using a browser you would have had a custom-built interface doing the display.

But the internet and browsers were the next progression, where instead of installing a client on your machine you could just access the application through a web browser. Originally the web browser would access business applications installed on a web server within the company's own data center (the same data center where the mainframe used to live). The database was also hosted within the data center, so everything was on-premise, managed by the company's own IT staff.

Now this is where the cloud comes in.

Cloud providers started making the servers available in their own data centers and allowed other companies to buy access to the servers. The servers remained in the cloud providers’ ownership and companies rented them. The original servers were just the same as the servers that IT was installing in the data center so this was Infrastructure as a Service (IaaS).

But many companies did not want to worry about the installation of server software on the IaaS servers they were renting so the Cloud providers started doing the installation themselves and sold access to the software level for hosting applications rather than to the whole server, in other words, they started selling the platform for applications (PaaS).

Big software companies were still selling software to companies who were then installing the software either on-premise or at the cloud data center. But this still required the company to have its own IT team who understood the software. So many big software companies started selling the installation services as part of their offering so in fact, the customer was only buying the software pre-installed somewhere in the world (SaaS).

PaaS still tied the software developers were writing to a specific platform. So cloud providers started allowing the upload of just source code that could be run when required. These functions were therefore independent of a specific platform that people had to rent, and would run, and be charged for, only when used (FaaS). The advent of FaaS also gave rise to the term serverless computing. Serverless is the ability to write and deploy code without ever having to worry about the infrastructure or platform the application is running on. This allows developers to write code and load it to the cloud, and the whole system works without anyone knowing where it is actually installed.

Cloud providers have now started making many other platforms available to companies. For example, containers can be run in the cloud, or machine learning training can be done in the cloud. Each of these becomes a new service and could be abbreviated to <X>aaS, e.g. A.I.aaS or Containers as a Service.

Cloud providers are continually adding new services. We have already run out of <X> letters for services and only a few are ‘official’ abbreviations anyway. As the cloud expands we will be provided with new services all the time, as IT professionals we need to be aware of as many services as possible, though it will be impossible to know them all.

Custom Bootstrap using Sass

I’ve known about SASS for a long time but never took the time to learn how to use it. For a new project (a University project) I wanted to use Bootstrap but also wanted custom colors on the website. Instead of creating my own CSS I decided to stick with Bootstrap but learnt to modify it for my own color palette.

The project is around COVID-19 and medical testing. As such I wanted to use a custom palette of:

Green representing Health

Blue standing for Medical personnel (Professions)

Yellow for Happiness

I spent some time working out how to achieve this in Bootstrap’s provided Scss files. It was so easy that I felt a bit of a fool that I had not learnt this earlier!

Prerequisites:

Before following this guide, you will need to have Node.js with npm installed.

To start, let's create a webpage that displays some buttons using Bootstrap. Download Bootstrap into the web project (I downloaded the Bootstrap sources from https://getbootstrap.com/docs/4.5/getting-started/download/) and unzip them into the webpage directory.


My HTML file contains:

<html>
   <head>
       <link rel="stylesheet" href="bootstrap-4.5.3\dist\css\bootstrap.css" />
   </head>
   <body>
       <h1>Custom Bootstrap using Sass</h1>
       <span class="btn btn-primary">Primary</span>
       <span class="btn btn-secondary">Secondary</span>
       <span class="btn btn-danger">Danger</span>
       <span class="btn btn-health">Health</span>
       <span class="btn btn-prof">Prof</span>
       <span class="btn btn-happy">Happy</span>
   </body>
</html>

Which, when viewed in a browser, gives a standard-looking Bootstrap set of buttons.


The last 3 buttons have classes that do not exist yet, so those classes are ignored and only the standard class="btn" styling is applied to them.

To start customizing the Bootstrap CSS we need to install the Sass compiler. Using npm, run the following command:

npm install -g sass

This installs Sass globally so we can run it from any project. Now I can start customizing Bootstrap to meet my requirements. First I modify the _variables.scss file found in the Bootstrap scss folder. Before I get to custom colors, I'll just change the primary color to check that my configuration is working correctly.


For my test I am going to change the primary color to red instead of the normal Bootstrap blue.
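For context, the default definition on that line of the Bootstrap 4.5 sources looks like this (your exact line number may differ between versions):

$primary:       $blue !default;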

Change line 67 to

$primary:       $red !default;

Save the file, and now I can compile the SCSS into CSS. In the terminal I run:

sass C:\projects\Sass1\bootstrap-4.5.3\scss\bootstrap.scss C:\projects\Sass1\bootstrap-4.5.3\dist\css\bootstrap.css

This command overwrites the originally downloaded CSS file. You could instead place the output file anywhere, for example in a \css directory in your project.
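For example, assuming the command is run from the project root and a css folder exists there, the compiled output could be written into the project instead (adjust the paths to suit your own layout):

sass bootstrap-4.5.3\scss\bootstrap.scss css\bootstrap.css

Remember to update the stylesheet link in the HTML if you change the output location.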

Now refresh the web page.


We have effectively changed all *-primary classes to be red.

How about creating our own button classes? Would it not be easier to use the class btn-happy for a yellow button instead of trying to remember whether primary or secondary is yellow?

In _variables.scss add the following lines after $dark (at line 75).

$health: $green;
$prof: $blue;
$happy: $yellow;

Then, in the $theme-colors map just below, add:

 "health": $health,

    "prof": $prof,

    "happy": $happy

So that $theme-colors looks like:

$theme-colors: () !default;
$theme-colors: map-merge(
  (
    "primary":    $primary,
    "secondary":  $secondary,
    "success":    $success,
    "info":       $info,
    "warning":    $warning,
    "danger":     $danger,
    "light":      $light,
    "dark":       $dark,
    "health": $health,
    "prof": $prof,
    "happy": $happy
  ),
  $theme-colors
);

Save the changes and compile the SCSS to CSS again.

When I refresh the web page I can see my custom colored buttons, using my own class names.


I now have buttons with my custom colors. In fact, all the Bootstrap components that use the color palette are now customized. To show this I can display an alert:

    <div class="alert alert-happy mt-5" role="alert">

            A simple Happy alert—check it out!

        </div>

When we refresh the page, the alert is displayed in the custom Happy yellow.


Note also how the compilation process applied the same color adjustments to the yellow that Bootstrap applies to its other alert colors, so the custom alert fits in with the rest of the theme.

Personally, I like the Bootstrap color theme. But being able to modify it to suit my own requirements makes it a lot easier to customize and use wherever I need it. For the website I am designing I don’t have Primary and Secondary colors. I have three primary colors called Health, Prof and Happy.

Personal Projects (Passion, Bugs and DevOps)

One question I have not been asked in a Job Interview is to discuss my “Personal Projects”. On github.com I have 21 public repositories and 5 private repositories. (I should add at least another 3 private repositories and a public repository based on my current personal projects)

Personal Projects show a developer’s passion for his craft. While at work you are told what languages, frameworks and libraries to use. On your personal projects you are free to explore the wild open expanse of developer options. To be honest I believe 80% of what I have learnt as a developer has been due to working on my personal projects.

If I have learnt 80% of my skills on personal projects, why is this not a question in interviews to find out what people are really teaching themselves?

The stage of development of a personal project is also quite important. If a personal project is being done for learning, its stage is unimportant. But if a developer has a personal project that has been released for common consumption, it probably means the developer has learnt a lot about software release management and software quality, i.e. it is likely the developer worries about bugs and bug management!

If personal projects can help a person learn about software quality and bug management, why are they not used in interviews to judge a developer's commitment to quality?

GitHub now has GitHub Actions, which can be used to build CI/CD pipelines. If a developer is using GitHub Actions for build and deployment, they understand the basics of DevOps. (I don't yet use GitHub Actions, but it is on my to-do list.)

If by asking about a developer's personal projects we can find out about their knowledge of, belief in, and use of DevOps, why aren't we asking about it in interviews?

For any developer looking for a job, your GitHub repositories are a part of your CV. Publish your broken attempts at making things work, publish your pet projects, contribute to other developers' repositories, and make use of the tools available. Tell everyone who is interested about your projects (probably only geeks like myself want to know what you are working on, but tell everyone anyway). Use your personal projects to show potential employers what you are capable of.

PS. I actually have been asked about my personal projects before – one interview I did was basically a comprehensive code review of one of my public projects. Based on that experience I make sure I keep updating my active repos and adding new ones as I learn new things.

PPS. Please send me a link, or comment below with a link, to your own github profile. I’d love to see what people are working on 🙂

PPPS. http://www.github.com/cairnswm

Docker, why I should use it

TL;DR: Because Docker is cool! Actually, really cool, because Docker enables DevOps through Infrastructure as Code.

When I develop code, it's on my own laptop, typically running in a local web server with the back end, front end and database all running close together. Very seldom does this match what we experience when we take our systems into production.

My production environment typically consists of a number of back-end servers fronted by a load balancer, possibly with auto-scaling functionality. The database is running somewhere else, possibly in a serverless cloud environment. The front end, where possible, is hosted on static storage to best serve as many end users as possible. In fact, the whole environment could be serverless if it's on a cloud provider.

So matching our local environment to look and behave like our production environment is really difficult.

Docker to the rescue. Docker allows me to start up multiple “servers” all on my own laptop, so I can easily create 2 or 3 back-end servers plus another server to host my front end statically. I can start up my database on its own server so it looks like it is remote and serverless. All this can be accomplished with a few configuration files that start it all up for me when I need it, as sketched below. Along with Docker, we can start up Kubernetes locally to do our auto scaling.
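A minimal docker-compose sketch of that kind of local setup might look something like this (the back-end image name, password and ports are placeholders, not something from the original post):

version: "3.8"
services:
  db:
    image: mysql:8              # database on its "own server"
    environment:
      MYSQL_ROOT_PASSWORD: secret
  api1:
    image: my-backend:latest    # hypothetical back-end image
  api2:
    image: my-backend:latest    # a second back-end instance
  web:
    image: nginx:alpine         # serves the static front end
    ports:
      - "8080:80"

Running docker-compose up -d then brings the whole “environment” up on the laptop, and docker-compose down tears it down again.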

So if I have configuration files to start up my “production” environment I am effectively doing Infrastructure as code. If I want to test it out on a different operating system I just update the config files and away I go.

If I want to be really clever I use Terraform as my Infrastructure as Code scripting language, store it in a Git repository, automate the process with Git hooks to restart the whole environment when I change my Terraform scripts. And suddenly I have a DevOps ecosystem running locally on my own Laptop. Now that is Cool!

Now all I need is a Laptop from work that can run my Docker farm!

PS. Preferably an i7, 8 cores, and 32GB of RAM please, and no I don’t need a touch screen thank you very much.

PPS. Actually 64GB of RAM would be even better! (Because my 24GB home laptop still doesn't like running more than 10 Docker instances at a time!)

These are my opinions made in my personal capacity, and may not match those of my employer.

Why I choose PHP and JavaScript


I’m a professional software developer but choose to use PHP and JavaScript for my “personal” projects. When other software development professionals hear this I often get asked why, because PHP has “no future”. 

It is all about ease of use! Getting a local development server up and running on my new laptop takes about 5 minutes: I just download XAMPP and run an install, and for tooling I download VS Code (also free), so I can be developing new code 10 minutes after I open my brand new laptop. Best of all, it is completely free!

But, I hear my colleagues say, you could use the cloud for Node.js, C#, Java, etc.! Yes I could, but those are 1) not as easy to set up and 2) not quite as free. If I develop something that has financial possibilities I can upload it onto a basic web hosting site for R40 per month. If and when it becomes a success I can then move it to a real hosting environment.


But, I hear them say again, you can set up a free web application on Azure or a t2.micro on AWS. Again, I agree I could, but then I need to worry about the OS, or the hosting platform, and then I need to sort out security so that I can access my MySQL database from my local development machine. With my friendly local hosting provider I get a pre-configured FTP account and, at the click of a button, a MySQL database that I can access from my local machine.

But, yet again <rolls eyes>, that will never be as secure. I agree it isn't, but so far it's a simple little idea I was testing out; it is not a super-secret app that holds my banking details.

IF and WHEN I get an idea that works, then taking the time and effort to configure a secure, elastic, load-balanced and expensive environment will become worthwhile.

Code like a Unit Tester

On Tuesday night (2019/05/14) I did a presentation at the Developer User Group. My talk was titled Code like a Unit Tester and focused on the different coding style required to write code that can be unit tested. Overall the presentation took about 7 minutes, and I had a good few chats afterwards with people interested in improving their skills in the unit testing area.