Since Implementing a Vignette, I've been pretty involved in getting various machine learning algorithms to, well, learn...

I've been playing around with SIFT feature extractors, Histograms of Oriented Gradients (HOG) and convolutional neural networks (CNNs), and at times it's been quite interesting.

When I started learning about machine learning techniques, I wouldn't say machine learning was of immediate interest to me. Ever since I did a course on Data Management and Analysis, I'd kinda thought designing software was more my thing. Sure, the graphs were cool, and I like graphs, but too much data fiddling just becomes too much data fiddling. So I didn't really learn machine learning back then, and the closest I got to classification was the K-nearest-neighbours algorithm.

The mathematics behind machine learning, however, is quite interesting, especially the partial-derivative calculations that tell you how much each model weight contributes to the loss function (backpropagation). I did have to write them out by hand initially, because otherwise I just would not have understood it.
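To make that concrete, here's the sort of thing I was scribbling out by hand - a tiny, hypothetical one-weight model (not anything from my actual pipeline), with the chain rule written out as code:

```python
# Toy model: y_hat = w * x + b, loss L = (y_hat - y)^2
x, y = 2.0, 10.0         # one training sample
w, b = 1.5, 0.5          # current weights

y_hat = w * x + b        # forward pass: y_hat = 3.5
loss = (y_hat - y) ** 2  # loss = 42.25

# Backward pass, by the chain rule:
# dL/dw = dL/dy_hat * dy_hat/dw = 2*(y_hat - y) * x
# dL/db = dL/dy_hat * dy_hat/db = 2*(y_hat - y) * 1
dL_dyhat = 2 * (y_hat - y)   # -13.0
dL_dw = dL_dyhat * x         # -26.0
dL_db = dL_dyhat * 1         # -13.0

# Nudge each weight against its gradient (gradient descent)
lr = 0.01
w -= lr * dL_dw
b -= lr * dL_db
```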

Once you understand this, you start to see that parts of machine learning are very much a brute-force, nudge-it-until-it's-correct sort of discipline - which is effective, though that's an over-simplification of course, and there are more smarts involved.

What is really quite impressive is that PyTorch's built-in Tensor type can track how each tensor contributes to any expression it participates in - calculating the impact of a tensor on, say, the loss function is just a matter of calling backward(), and every tensor in the expression's computation graph gets its gradient evaluated. Quite cool. It means you don't have to work the chain rule out manually on a piece of paper! I also found it pretty cool how easy it is to move tensors to the GPU to speed up training times.
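As a rough sketch of what that looks like, using the same toy numbers as above (assuming a reasonably recent PyTorch):

```python
import torch

x = torch.tensor([2.0])
y = torch.tensor([10.0])
w = torch.tensor([1.5], requires_grad=True)  # tracked tensors
b = torch.tensor([0.5], requires_grad=True)

loss = ((w * x + b - y) ** 2).sum()
loss.backward()          # walks the graph and fills in .grad
print(w.grad, b.grad)    # tensor([-26.]) tensor([-13.])

# Moving a tensor (or a whole model) to the GPU is a single call:
if torch.cuda.is_available():
    x = x.to("cuda")
```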

With this, I've pretty much swapped Ruby for Python, which has become almost an extension of me lately (same goes for C++, but more on that later). Interestingly, while designing my feature pipeline, I found that Python has no real concept of private members - the convention is a single leading underscore to mark something as internal (a double leading underscore additionally triggers name mangling). Abstract base classes do exist, which was useful, as I designed a pipeline (which is basically this) around interfaces that allow uniform interaction while letting the underlying implementation details vary.
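Roughly, the interface idea looks like this - the names here are illustrative rather than my actual pipeline code, and I'm assuming OpenCV's SIFT is available:

```python
from abc import ABC, abstractmethod
import cv2

class FeatureExtractor(ABC):
    """Uniform interface for the pipeline; each extractor hides its
    own implementation details behind extract()."""

    @abstractmethod
    def extract(self, image):
        """Return the feature descriptors for a single image."""

class SiftExtractor(FeatureExtractor):
    def __init__(self):
        # Leading underscore: "internal" by convention, not enforced.
        self._sift = cv2.SIFT_create()

    def extract(self, image):
        _keypoints, descriptors = self._sift.detectAndCompute(image, None)
        return descriptors
```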

My pipeline currently consists of two classifiers (a Support Vector Machine, SVM, and a Multi-Layer Perceptron, MLP), two feature extractors (SIFT and HOG), and one convolutional neural network (CNN) based on MobileNetV2.
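As a hedged sketch of how one extractor/classifier pairing might be wired up (assuming scikit-image and scikit-learn; the function name and parameter values are made up for illustration, not my tuned setup):

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def train_hog_svm(images, labels):
    """Train an SVM on HOG features. `images` is assumed to be a list
    of equally-sized greyscale arrays; the HOG parameters are typical
    defaults rather than tuned values."""
    X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for img in images])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```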

The CNN I originally designed from scratch needed more training than I had time for, so its progress was poor. Instead, I've been fine-tuning the MobileNetV2 one using pre-trained weights, just adapting it to learn the classes that I'm interested in.
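The fine-tuning recipe is roughly this - a minimal sketch assuming a recent torchvision, with num_classes as a placeholder rather than my actual class count:

```python
import torch.nn as nn
from torchvision import models

num_classes = 5  # placeholder: however many classes you care about

# Load ImageNet weights and freeze the convolutional backbone.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False

# Swap the classifier head for one sized to the new classes;
# only this new layer (plus anything left unfrozen) gets trained.
model.classifier[1] = nn.Linear(model.last_channel, num_classes)
```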

I will say that I particularly enjoyed the learning around machine learning theory, for example understanding why a non-linear function (Sigmoid or ReLU, for example) is applied after linearly combining the input values: without the non-linearity, any stack of linear layers collapses into a single linear transformation, whereas with it the network can bend its shape towards a function that best describes the input.
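You can actually verify the "linear layers collapse" part in a few lines (a toy demonstration, not pipeline code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(4, 3)

# Two linear layers with no activation in between...
f = nn.Sequential(nn.Linear(3, 5), nn.Linear(5, 2))

# ...are exactly equivalent to one combined linear map.
W = f[1].weight @ f[0].weight                 # combined weight
b = f[1].weight @ f[0].bias + f[1].bias       # combined bias
print(torch.allclose(f(x), x @ W.T + b, atol=1e-6))  # True

# A ReLU in between breaks that equivalence, which is what lets
# the network represent non-linear functions at all.
g = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 2))
```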

To this extent, this book was particularly useful in explaining the 'why' rather than just having you do it and move on, which is so often the case with technical theory - not that this book was overly technical; it was more practical and provided diagrams like this one, which my brain seems to appreciate. I wouldn't say I'm proficient yet, but I'm interested, which is more than I could say before.

Pity I don't have an NVIDIA graphics card, so I've been having to use the GPU in Google Colab. I hit the usage limit a few times while training, but eventually got 83% validation accuracy, which is pretty good.

Apart from that, I've also been writing a lot of C++/OpenGL and finished a demo racing game, which I very much enjoyed programming. It's very simple, but it shows off various important 3D graphics techniques.

I've implemented an exponential fog effect, and my scene is basically themed on a Jurassic Park-style atmosphere.
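The fog itself is just the standard exponential fall-off, factor = e^(-density * distance), blended with the surface colour. Here's the idea sketched in Python for readability (the real version lives in the fragment shader, and the density value here is made up):

```python
import math

def fog_blend(surface_rgb, fog_rgb, distance, density=0.05):
    """Exponential fog: factor -> 1 near the camera (no fog),
    -> 0 far away (all fog). Illustrative only; the demo does
    this per-fragment in GLSL."""
    factor = math.exp(-density * distance)
    factor = max(0.0, min(1.0, factor))
    return tuple(factor * s + (1.0 - factor) * f
                 for s, f in zip(surface_rgb, fog_rgb))
```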

I've incorporated some meshes for the player car, the forest and the track. The path through the scene is calculated using Catmull-Rom splines, and the rear-view mirror is programmed using a framebuffer object.
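For reference, a Catmull-Rom segment is just a cubic blend of four control points. A sketch of the standard formula (in Python for readability; my demo's version is C++):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment between p1 and p2, t in [0, 1].
    Works on anything supporting scalar maths, e.g. numpy arrays.
    The curve passes through p1 at t=0 and p2 at t=1."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)
```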

Most of the shader code is for the lighting effects and the fog. For the lighting, the Blinn-Phong model is used. It's been very interesting managing the vertex buffers, drawing 3D primitives and so on.
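The Blinn-Phong idea in a nutshell: the diffuse term scales with how directly the surface faces the light, and the specular highlight uses the half-vector between the light and view directions. A sketch (Python for readability, assuming normalised input vectors; the demo does this in GLSL):

```python
import numpy as np

def blinn_phong(normal, light_dir, view_dir, shininess=32.0):
    """Diffuse and specular terms of the Blinn-Phong model.
    All vectors are assumed normalised; shininess is illustrative."""
    diffuse = max(float(np.dot(normal, light_dir)), 0.0)
    half_vec = light_dir + view_dir
    half_vec = half_vec / np.linalg.norm(half_vec)
    specular = max(float(np.dot(normal, half_vec)), 0.0) ** shininess
    return diffuse, specular
```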

What I'd like to do next is extract the code I developed for the demo into a more general-purpose library that I can use in the next thing I do.

In terms of information security, I wrote a critical review on securing software development processes, as recommended by Clause 14 of ISO 27001. It examined the utility of implementing E-SecSDM, an engineering-focused software development model intended to improve the security of engineering systems that incorporate software in their design.

I think the learning I did previously on digital forensics, criminal law and network security was a bit more exciting than learning about ISO 27001/2, but like most things, it's good to know a little more than you did before, so in that way it's useful.

I've also been out and about running - that knee niggle seems to have sorted itself out (well, I did implement a no-run policy for about two months). My fitness has dropped off, but that's OK - I've been slowly working my way back up. My last couple of runs were slow, but they were pretty nice, especially now that the sun is starting to come out.

Speaking of which, maybe I should go for a run now...