AI Art

AI art refers to the creation of visual art using artificial intelligence algorithms. The use of AI algorithms in creating art has generated a lot of discussion and debate in the art world, with one of the critical questions being whether AI will replace artists.

When it comes to what makes art good, is it the technique used to create the artwork or the ability to lay out the scene, characters, and items in a pleasing way? Both of these elements are important in creating successful artwork, but there is a growing consensus that the composition of the work is what makes it truly good.

Artists have a unique ability to understand what makes a composition pleasing to the eye. They have a deep understanding of color theory, balance, and proportion. They can create compositions that evoke emotions and tell a story. In contrast, AI algorithms can create art that is technically sound, but it often lacks the emotional depth and narrative that artists can bring to their work.

I don’t believe AI art will replace artists. While AI algorithms can be used to create visually stunning artwork, they lack the emotional depth and narrative that artists bring to their work. Artists will always excel at knowing what makes a pleasing composition, and their skills and expertise cannot be replicated by machines.

So, AI art is an exciting development in the world of art (and AI), but it should not be seen as a replacement for artists. Artists will always play a crucial role in the creation of art that evokes emotions and tells a story. The unique abilities of artists will always be essential in creating truly great artwork.

AI is a Tool, like any other

Artificial Intelligence (AI) is becoming an increasingly important tool in the modern world of writing, changing the way articles are written and published. However, just like any other tool, its use is often the subject of debate. In this article, we’ll explore the role of AI as a tool in article writing, and why we don’t need to make a fuss about its use.

First, let’s consider the argument that people need to be aware of the limitations of AI and understand how it’s being used to influence their decisions. This is a valid concern, but it’s not unique to AI. The same argument could be made for any tool that’s used in article writing, such as Microsoft Word or the pen or pencil used to write the article.

When we write an article, for example, it’s not necessary to announce the make of pen or pencil used, or the software used to format the manuscript. The same applies to AI. If we’re using AI to help us write an article, it’s not necessary to make a big deal about it. The most important thing is the content of the article itself, not the tools used to create it.

Another argument is that AI is more powerful than other tools and therefore, it’s important to be aware of its limitations. While it’s true that AI is a powerful tool, it’s still just a tool. It’s not making decisions on its own, but instead, it’s being used by people to make decisions. The limitations of AI should be considered in the same way as the limitations of any other tool.

Finally, it’s worth noting that the use of AI can lead to increased efficiency and effectiveness in article writing. Just like any other tool, AI can be used to automate repetitive tasks, freeing up time for more valuable work. AI can also help us to analyze data and make better decisions, making our writing more accurate and effective.

AI is just another tool in the world of writing, and its use should not be cause for concern. The most important thing is how the tool is being used, not whether or not it’s AI.

ChatGPT by OpenAI

ChatGPT and GPT-3 by OpenAI are trained AI models that can generate text on various topics, including answers to homework questions. Through my observations, I have come to the conclusion that it’s time for us to rethink the way we approach AI in education.

Traditionally, the focus in education has been on teaching children how to perform specific tasks, such as writing an essay or solving a math problem. While these skills are important, they don’t necessarily prepare students for the future. In an era where AI is becoming increasingly prevalent, it’s crucial that we teach children how to think, not just what to do.

One of the main reasons for this shift is that AI has the potential to automate many of the tasks that students are currently taught to do. In the near future, machines may be able to write essays and solve math problems faster and more accurately than humans. This means that the skills that students are learning today may become obsolete in the future.

Instead of teaching students how to do specific tasks, we should be teaching them how to think critically and creatively. These are skills that are unlikely to be automated by AI and will become increasingly valuable as technology continues to advance. By teaching children how to think, we are preparing them for a future in which they can adapt and thrive, no matter what changes come their way.

In addition to being future-proof, teaching children how to think also has numerous other benefits. It helps them to develop problem-solving skills, encourages creativity and innovation, and promotes independent thinking. These are all skills that are essential in the modern world and will help students to succeed in any field they choose to pursue.

To embrace the exciting potential of AI in education, it’s crucial that we rethink our approach. Instead of teaching children how to do specific tasks, we should be teaching them how to think. By doing so, we are preparing them for a future in which they can thrive and succeed, no matter what changes come their way.

(Yes, most of this post was written by AI)

Mainframe to Cloud – a short history

To understand cloud technologies we need to understand the older technologies they grew out of. The cloud is best seen as an evolution of existing technologies rather than a brand new technology, and ideally we should build software with a cloud-first view rather than by comparing it to what came before. But, because the cloud evolved from older architectures, the comparison will always remain.

So a short history lesson:

In the beginning was the mainframe: a single large computing hub, with terminals (a screen and keyboard) distributed to wherever people sat and did their work. While it looked like people were doing work at a computer at their desk, the computer was actually in the computer room; they were working on what was termed a dumb terminal, as it had no processing capability of its own.

Then came personal computers, which moved the processing to the desktop. Personal computers were often linked to a bigger computer in the backend but had processing capabilities of their own: CPU, memory, and storage. The personal computer originally had all the programs installed as applications on the device. These programs would then make network calls to a database sitting somewhere else, which would answer queries that the application would display to the user. This was called client-server (client = desktop, server = database).

Client-server then evolved into a three-tier architecture. Some of the processing that the desktop was doing moved to a server, so the client application started displaying information to the user instead of processing the data; processing (and database access) was handled by the server layer. If this sounds similar to the web, in a way it was, but instead of using a browser you would have had a custom-built interface doing the display.

But the internet and browsers were the next progression: instead of installing a client on your machine, you could just access the application through a web browser. Originally the web browser would access business applications installed on a web server within the company’s own data center (the same data center where the mainframe used to live). The database was also hosted within the data center, so everything was on-premise, managed by the company’s own IT staff.

Now this is where the cloud comes in.

Cloud providers started making servers available in their own data centers and allowed other companies to buy access to them. The servers remained in the cloud provider’s ownership and companies rented them. The original servers were just the same as the servers IT was installing in the data center, so this was Infrastructure as a Service (IaaS).

But many companies did not want to worry about installing server software on the IaaS servers they were renting, so the cloud providers started doing the installation themselves and sold access at the software level for hosting applications, rather than to the whole server. In other words, they started selling the platform for applications (PaaS).

Big software companies were still selling software to companies, who then installed it either on-premise or at the cloud data center. But this still required the customer to have its own IT team who understood the software. So many big software companies started selling installation and hosting as part of their offering, meaning the customer was in effect buying the software pre-installed somewhere in the world (SaaS).

PaaS still tied the software developers were writing to a specific platform. So cloud providers started allowing the upload of just source code that could be run when required. These functions were independent of any specific platform people had to rent, and would run, and be charged for, only when used (FaaS). The advent of FaaS also gave rise to the term serverless computing. Serverless is the ability to write and deploy code without ever having to worry about the infrastructure or platform the application is running on. This allows developers to write code and load it to the cloud, and the whole system works without anyone knowing where it is actually installed.

Cloud providers have now started making many other platforms available to companies. For example, containers can be run on the cloud, or machine learning training can be done in the cloud. Each of these becomes a new service and could be abbreviated as <X>aaS, e.g. AIaaS or Containers as a Service (CaaS).

Cloud providers are continually adding new services. We have already run out of <X> letters for services and only a few are ‘official’ abbreviations anyway. As the cloud expands we will be provided with new services all the time, as IT professionals we need to be aware of as many services as possible, though it will be impossible to know them all.

Personal Projects (Passion, Bugs and DevOps)

One question I have not been asked in a job interview is to discuss my “personal projects”. On github.com I have 21 public repositories and 5 private repositories. (I should add at least another 3 private repositories and a public repository based on my current personal projects.)

Personal projects show a developer’s passion for their craft. At work you are told what languages, frameworks and libraries to use; on your personal projects you are free to explore the wild open expanse of developer options. To be honest, I believe 80% of what I have learnt as a developer has been due to working on my personal projects.

If I have learnt 80% of my skills on personal projects, why is this not a question in interviews to find out what people are really teaching themselves?

The stage of development of a personal project is also quite important. If a personal project is being done for learning, its stage is unimportant. But if a developer has a personal project that has been released for common consumption, it probably means the developer has learnt a lot about software release management and software quality, i.e. it is likely the developer worries about bugs and bug management!

If personal projects can help a person learn about software quality and bug management, why are they not used in interviews to judge a developer’s commitment to quality?

Github.com now has GitHub Actions, which can be used to build CI/CD pipelines. If a developer is using GitHub Actions for build and deployment, they understand the basics of DevOps. (I don’t yet use GitHub Actions, but it is on my todo list.)
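As a sketch of what that looks like: a workflow is just a YAML file in the repository under `.github/workflows/`. The Node version and the `npm test` command below are assumptions about the project being built, not something specific to GitHub Actions itself:

```yaml
# .github/workflows/ci.yml: build and test on every push.
name: CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # fetch the repository
      - uses: actions/setup-node@v4    # install Node.js
        with:
          node-version: 20
      - run: npm install               # install dependencies
      - run: npm test                  # run the test suite
```

Commit that file and GitHub runs the pipeline automatically; the results appear on every push and pull request.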

If, by asking about a developer’s personal projects, we can find out about their knowledge of, belief in and use of DevOps, why aren’t we asking about it in interviews?

For any developer looking for a job, your Github.com repositories are part of your CV. Publish your broken attempts at making things work, publish your pet projects, work on other developers’ repositories, and make use of the tools available. Tell everyone who is interested in your projects (probably only geeks like myself want to know what you are working on, but tell everyone anyway). Use your personal projects to show potential employers what you are capable of.

PS. I actually have been asked about my personal projects before – one interview I did was basically a comprehensive code review of one of my public projects. Based on that experience I make sure I keep updating my active repos and adding new repos as I learn new things.

PPS. Please send me a link, or comment below with a link, to your own github profile. I’d love to see what people are working on 🙂

PPPS. http://www.github.com/cairnswm

Docker, why I should use it

TL;DR: Because Docker is cool! Actually, really cool, because Docker enables DevOps with Infrastructure as Code.

When I develop code it’s on my own laptop, typically running in a local web server with the back end, front end and database all running close together. Very seldom does this match what we experience when we take our systems into production.

My production environment typically consists of a number of back-end servers fronted by a load balancer, possibly with auto-scaling functionality. The database is running somewhere else, possibly in a serverless cloud environment. The front end, where possible, is hosted on static storage to best serve as many end users as possible. In fact, the whole environment could be serverless if it’s on a cloud provider.

So matching our local environment to look and behave like our production environment is really difficult.

Docker to the rescue. Docker allows me to start up multiple “servers”, all on my own laptop. So I can easily create 2 or 3 back-end servers, another server to host my front end (statically), and a database on its own server so that it looks remote and serverless. All this can be accomplished with a few configuration files that start it all up for me when I need it. Along with Docker, we can start up Kubernetes locally to do our auto-scaling.
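As a sketch, a single docker-compose file can describe that whole layout. The image names, ports, paths and password below are placeholders for illustration, not a recommendation:

```yaml
# docker-compose.yml: an illustrative local "production" layout
# with two back-end servers, a static front end, and a separate
# database, all running as containers on one laptop.
services:
  backend1:
    image: php:8.2-apache          # back-end app server 1
    ports: ["8081:80"]
  backend2:
    image: php:8.2-apache          # back-end app server 2
    ports: ["8082:80"]
  frontend:
    image: nginx:alpine            # serves the static front end
    volumes:
      - ./dist:/usr/share/nginx/html:ro
    ports: ["8080:80"]
  db:
    image: mysql:8                 # looks remote to the back ends
    environment:
      MYSQL_ROOT_PASSWORD: example
    ports: ["3306:3306"]
```

`docker compose up` then starts the whole environment, and `docker compose down` tears it down again when I am done.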

So if I have configuration files to start up my “production” environment, I am effectively doing Infrastructure as Code. If I want to test it out on a different operating system, I just update the config files and away I go.

If I want to be really clever, I use Terraform as my Infrastructure as Code scripting language, store it in a Git repository, and automate the process with Git hooks to restart the whole environment whenever I change my Terraform scripts. Suddenly I have a DevOps ecosystem running locally on my own laptop. Now that is cool!

Now all I need is a Laptop from work that can run my Docker farm!

PS. Preferably an i7 with 8 cores and 32GB of RAM please, and no, I don’t need a touch screen, thank you very much.

PPS. Actually, 64GB of RAM would be even better! (Because my 24GB home laptop still doesn’t like running more than 10 Docker instances at a time!)

These are my opinions made in my personal capacity, and may not match those of my employer.

Why I choose PHP and JavaScript


I’m a professional software developer but choose to use PHP and JavaScript for my “personal” projects. When other software development professionals hear this I often get asked why, because PHP has “no future”. 

It is all about ease of use! Getting a local development server up and running on my new laptop takes about 5 minutes: I just download XAMPP and run the installer, and for tooling I download VS Code (also free). I can be developing new code 10 minutes after I open my brand new laptop. Best of all, it is completely free!

But, I hear my colleagues say, you could use the cloud for Node.js, C#, Java etc! Yes, I could, but those are (1) not as easy to set up and (2) not quite as free. If I develop something that has financial possibilities I can upload it to a basic web hosting site for R40 per month. If and when it becomes a success I can then move it to a real hosting environment.


But, I hear them say again, you can set up a free web application on Azure or a t2.micro on AWS. Again, I agree I could, but then I need to worry about the OS or the hosting platform, and then I need to check my security so that I can access my MySQL database from my local development machine. With my friendly local hosting provider I get a pre-set-up FTP account and, at the click of a button, a MySQL database that I can access from my local machine.

But, yet again <rolls eyes>, that will never be as secure. I agree it isn’t, but so far it’s a simple little idea I was testing out, not a super secret app that holds my banking details.

IF and WHEN I get an idea that works, then taking the time and effort to configure a secure, elastic, load-balanced and expensive environment will become worthwhile.

Knowledge vs Skill

So I’ve decided to start looking for a new job. While reading through job adverts on http://weworkremotely.com I realized that while I have a lot of knowledge around my area of specialization (software development), I am no longer sufficiently skilled to be a software developer.

That doesn’t mean I can’t be a software developer, but it does mean that I won’t pass a skills test on my preferred development tools. For example, JavaScript: I know what is possible in JavaScript and can look it up when I need to, but I can’t write a test on JavaScript and pass.

Knowledge, Belief and Faith

I now understand the difference between Knowledge, Belief and Faith….

While riding a 24-hour mountain bike race (we did laps for 24 hours), on the first lap I was riding behind 2 other riders when we approached a pipe that went over the trail. As I was behind them, I could see there was a large gap between the cyclists’ heads and the pipe.

So I had the knowledge that the pipe was well above head height.

When I went under the pipe I ducked anyway.

So I did not Believe it.

After a few laps I stopped ducking.

Now I truly believed that the pipe was way above my head height.

After another few laps I realized I was no longer even looking at the pipe.

I now had faith that the pipe would not move and therefore I would always have space under it.