Details
Category: Blog
By Stuart Mathews
20 July 2025
Last Updated: 02 August 2025
Hits: 526

Thoughts on Bayesian networks

Since Thoughts on Reinforcement learning, and after reading that paper on DQN and being a bit more sure about how reinforcement learning is implemented algorithmically (the Bellman update), I started wondering about other, unrelated things, like what a Bayesian network is.

I've seen references to Bayesian networks in literature I've read without having an intuitive understanding of what they are, how they work and, more importantly, what applicability they might have to me in general - because why not know? I've also felt this way about probability and Markov chains, delving into aspects of probability distributions, hidden Markov models (HMM) and Markov decision processes (MDP), and this ultimately led me to Bayesian networks, probably (no pun intended!) because they also have to do with probability. Also, I had recently conducted a research task for Brunel where I needed to review papers on types and applications of Deep Neural Networks (DNN), which are very much grounded in probability, and that is probably where this whole foray into learning about probability started. However, I digress...

I've been interested in Bayesian networks because they are said to be usable for making intuitive decisions in machines/computers.

Specifically, they allow decisions to be made in a way similar to how humans might make them: by indirectly inferring that a certain situation holds despite not directly witnessing it, i.e. they use other indications or conditions that the situation depends on (to varying degrees) as a means to suggest that the situation is occurring. They do this systematically, where humans do it more intuitively or perhaps even superstitiously.

This is interesting if you'd like to simulate decision making in a more human way within an artificial entity such as a game character or a robot, for example. The key is that it can be achieved through a systematic, well-definable process (which is what machines like, and what can be implemented as an algorithm) and it produces human-like behaviour (as a result of seemingly plausible human-like decision making process) which is what we'd like to achieve in a simulated artificial intelligent entity. 

While I've suggested that Bayesian networks can be used to help make decisions (I'm not going to explain exactly how yet), they can also be used to learn, and to indicate/detect the probability that a previously observed situation is currently occurring despite only knowing some aspects about it right now.

It learns by gaining more experience of the make-up of historical situational data, i.e. what the conditions were when situations occurred, and uses the frequency of certain situational aspects as a means to predict the situation when only some of those aspects are known at this very moment. This means that, right now, you can predict whether the situation is occurring with only fragments of knowledge about it.

The more experience you gain of the conditions of the situations, the more accurate the prediction will be when only presented with some of the conditions. It might be challenging to realise the impact of this idea.

For example, these ideas are used by spam detection algorithms. They collect aspects/conditions about emails and ask you to add another condition which indicates whether the email is spam or not. As more instances where those aspects/conditions are marked as spam accumulate in the historical data, the probability increases that those aspects/conditions indicate (detect) spam - specifically when you don't know it's spam but you do know the other conditions, and the historical data shows some of these aspects contributing to spam before.

Additionally, if you know more spam-related conditions/aspects about the email, the probability of it being detected as spam increases, i.e. the more you know about the spam conditions, the more likely it will be detected as spam.
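This frequency-counting idea can be sketched with a minimal naive-Bayes-style estimate in Python. The dataset and condition names here are invented purely for illustration; this is a sketch of the idea, not a real spam filter:

```python
# Hypothetical historical data: each email is a set of observed
# conditions (features) plus a label saying whether it was spam.
emails = [
    ({"has_link", "unknown_sender"}, True),
    ({"unknown_sender", "all_caps_subject"}, True),
    ({"has_link"}, False),
    ({"has_attachment"}, False),
]

def spam_probability(conditions, history):
    """Estimate P(spam | observed conditions) from the frequency of
    each condition in historical spam/non-spam, with add-one smoothing."""
    spam = [c for c, is_spam in history if is_spam]
    ham = [c for c, is_spam in history if not is_spam]
    p_spam, p_ham = len(spam) / len(history), len(ham) / len(history)
    for cond in conditions:
        # How often did this condition appear in each class historically?
        p_spam *= (sum(cond in c for c in spam) + 1) / (len(spam) + 2)
        p_ham *= (sum(cond in c for c in ham) + 1) / (len(ham) + 2)
    return p_spam / (p_spam + p_ham)

# Knowing only one condition vs knowing an additional spam-related one:
p1 = spam_probability({"has_link"}, emails)
p2 = spam_probability({"has_link", "unknown_sender"}, emails)
```

With this toy data, `p2 > p1`: adding another spam-associated condition raises the estimated probability, which is exactly the behaviour described above.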

This is extremely useful/interesting and incidentally this is also how weather is predicted.

For example, forecasters look at historical data and from it work out the conditions that contribute to the probability of rain; they then take what they know about today's conditions to determine how those conditions likely contribute to the probability of rain today. Again, the more that is known about the conditions that cause rain, the better the prediction of rain will be.

There is more to be said about how Bayesian networks work, specifically how they are implemented algorithmically and mathematically but this will be reserved for a future article.

  • Learning
  • Agents
  • Bayesian networks
Details
Category: Code
By Stuart Mathews
27 February 2022
Last Updated: 04 January 2023
Hits: 5802

A software engineering process for delivering software

Recently, since Fading importance and the utility of lists, I've been thinking about the top-level approach/process that I use for doing development work, generally.

The main reason was that I was wondering how different other disciplines of development might approach doing work and how much commonality there might be in what I generally do. So I thought it would be interesting to outline my conceptual flow when approaching new work.

A typical strategy I've taken to delivering software over the last couple of years, which has focused on being more agile, i.e. exposing and sharing issues more readily and making the strategy design process more collaborative, looks something like this:

My General Process for doing development work
1. Responsibility to deliver is the idea that you are given a task to do and you need to take ownership of getting it done.

This is less a stage and more a statement of fact, really. I wanted to put this down because it's an important realization that, as a developer, you need to take much of the initiative to develop a solution, and much of the solution comes from the developer's experience, expertise and knowledge. This is why a developer trains: so that they can be given the task of delivering software. It's a personal responsibility first and foremost, and this is why you were hired. Of course, as a developer, you will integrate with your team and share, explore and discuss solutions, but ultimately you are there to carry out much of the construction work.

It is a psychological element where the responsibility to implement a solution is given to the developer. This usually means that much of the burden of its delivery and issues are thought to be offloaded to the engineer. This inevitably starts the process in the developer's mind of what needs to be done and how they should think about doing it. It also suggests that the developer will be the main implementor of whatever the solution is determined to be and that the success of the solution lies with the developer. This is a fair amount of responsibility and inevitably much of it is assumed.

2. Interpreting requirements is about understanding what is being asked to be done.

Generally, this is a process of thinking about and rationalizing the requirements; it is likely to involve independent research and might extend into social collaboration and discussion if research proves less successful.

I tend to draw a picture of what is being asked, as this allows me to put down all the elements of the requirements somewhere on the diagram. This helps me ensure that I'm making provision for all the elements in the requirements. My diagram becomes whatever the requirements prompt me to draw. It could be a mind map with arrows, it could be boxes with logical connections etc.

3. Designing ideas is about thinking about and exploring how the requirements can be realised creatively.

This would involve modelling ideas that would result in the delivery of the solution by meeting the requirements. This might expand into considering aspects of the design such as the feasibility, architecture, components, quality etc and of requirements and implementation details that are not defined or realised yet.

This process naturally derives from the initial idea that formed in my head as I was interpreting the requirements. It might manifest as a series of conceptual ideas in my mind which I may or may not diagram. This is very much an exploratory thinking process. Typically I extend my initial diagrams with more arrows and boxes, or create new diagrams altogether.

Here are some examples: example 1 example 2

4. Solution design is about selecting a design from the ideas already explored.

This is about selecting, against criteria, the design idea that is better than the rest. The criteria can consider aspects of feasibility, quality and resources.

It is likely to require considering the architecture and its concerns, and formulating the overall design theoretically. It usually considers what is good or time-efficient, among other aspects, and will also usually entail considering what technology and technicalities need to be used.

Example

5. Technical solution implementation is where the programming happens, and is about using specific technology and implementing it to translate the design into the technical solution that works to deliver the requirements.

This involves the technicalities of the technologies being used to develop the solution. This stage is likely to be the most transformative and subject to the most change and where most of the time will be spent. 

  1. Coping with change: here we deal with unexpected changes and problems that occur during the technical implementation. 
  2. Responding and dealing with change: this is also where issues and changes must be responded to and reported back to the team, and the strategy readjusted in light of the changes/issues.
  3. Verifying and testing: this is where what you've implemented is verified, mostly through unit testing, integration testing and manual testing. 

 See Strategy for Designing Software Solutions for an illustration of the process

 Sharing the responsibility with your group

The stages of this process are initially from the perspective of the developer (me) who has been tasked with delivering the software solution. This means that at each step the developer is cognitively involved in rationalising the information and tasks to develop a mental blueprint of the upcoming work that he/she needs to deliver. This means that much of the process is internalised by the individual developer, and any artefacts that are externalised/produced by the developer, such as diagrams and documentation, are initially produced to aid their own understanding and solidify their development methodology.

Personal artefacts are the outcomes of the process carried out by the developer and can include designs/diagrams, documentation and code

While the process is initially a personal approach initiated and implemented by the developer, many outcomes can be exposed to the group, particularly unknowns and concerns that the developer might have picked up while defining their process.

A problem with personal exposure to the process is that the amount of responsibility and burden that is implicitly on the developer grows as the process is followed and more information is uncovered. This is especially true of discovered unknowns that need to be accounted for or potential concerns or shortcomings that the developer realises not only within the solutions but within their own personal skillset for example.

Holding onto these concerns, unknowns and problems can be psychologically taxing, as they weigh down on the developer during subsequent stages of the process, such as during Technical solution implementation. This is where group exposure to these issues can serve as a relief, as it can distribute and share the responsibility of dealing with them, including sourcing potential solutions. It also allows expectations to be readjusted, as many issues are likely to impact the Technical solution implementation stage, which is of principal concern for the individual developer as well as the group. This is likely to decrease the pressure the developer faces when having to deliver the technical solution, as it involves more people (the team) and distributes some of the burden, so that technical solution implementation can proceed with more clarity about the impact these issues will have on it.

In this way, there needs to be a shared notion of the process where the personal process extends into the group and vice versa; however, much of the process is initially centred on the engineer's personal ability to define it. It then needs to be fed back into the group to share and initiate feedback that serves both the developer and the group depending on the developer to deliver the software solution.

Group artefacts are resources that allow for exposure and feedback back into the developer's process and can include meetings, design/documentation review, project/goal/strategy/planning tracking and code review.

Discussion

Generally, I find it useful to produce an artefact that represents my personal outcomes or thinking, such that they can be shared with the team, both for feedback but also for relieving the pressure from dealing with unknowns/uncertainties/issues etc. This includes taking a strategy that keeps track of my progress as outlined in Fading importance and the utility of lists which can help to quantify your efforts throughout the above process.

I would say a large portion of the process is similar to and has parity with the typical software development process, where requirement gathering precedes analysis, which leads to design and then ultimately implementation. However, my process presents this slightly differently, as more of a personal development process followed by an individual contributor (developer) within the overall software development lifecycle of a product or feature.

Also, in the process, there is an emphasis on the initiative that the developer must have when encountering issues, as the developer is often the first person to become aware of them, and failure to expediently expose issues can result in unnecessary pressure in dealing with them. So in this way, this process factors in these personal concerns. 

The process tracks individual concerns and activities such as research, the need for understanding, and idea modelling. It implies that requirements are already gathered and just need to be correctly understood, or, if they are poorly gathered, that it's the developer's responsibility to determine what the requirements are and then understand them in order to develop a solution that delivers them.

I think much of the business analyst role, especially in more agile, smaller teams, is now largely deprecated, and much of this responsibility is falling on individual experienced developers.

There is also generally the feeling in my mind that more responsibility is required of developers to deliver end-to-end solutions, and I suspect this is because the complexity of software is difficult to adequately distribute across multiple actors such as an analyst, project manager and other developers.

So more of these tasks are taken on by the individual developer, and this requires more skills, particularly better communication skills (which, arguably, the other roles mentioned previously would traditionally be better at). However, it's arguably easier to distribute the development tasks to other developers, provided the tasks have been defined - which again is usually the work of another developer, if indeed the tasks get distributed at all. In some cases, the developer is responsible for the entire end-to-end software solution due to the difficulty of communicating the complexity of the implementation to others in an adequate time, and so the work takes as long as it takes the developer who understands it to implement. 

I think there is a concern in software projects, that due to the complexity of software projects, both theoretically but more problematically at a technical level, it is becoming difficult to speed up development by bringing in new developers as the time to transfer understanding of the complexity is now longer than the time required to deliver the entire solution. So in some cases, solution delivery is not being given strict deadlines but they are progressively being delivered piecemeal as they are being incrementally implemented by the developers who understand the solution/problem/complexity.

In the process outlined, understanding requirements, designing ideas and thinking about how to model and implement them, theoretically and technically, is left to the individual developer to satisfy, requiring and drawing knowledge from past experience and training. This suggests that the quality of the developer is increasingly being recognised as important for their ability to understand, contextualize, rationalize, simplify and execute the delivery of solutions in light of the increasing complexities of software development generally.

An interesting discussion that is not touched upon here is how this process differs from the typical game development process, and how either can learn from the other. For example, is there a similarity in requiring research as part of a task, or in communicating unknowns to the team, and how is this carried out? I imagine there would be some similarity to non-game development, but to what extent, and how does it differ?


  • Software Engineering
Details
Category: Code
By Stuart Mathews
10 December 2021
Last Updated: 27 December 2021
Hits: 4636

Encrypting strings at rest

Since ISO27001, Machine-Learning and Game dev, I recently wanted to store some sensitive data (a private key) in a string in C# and keep it in memory for the duration of an operation.

Due to the design of some 3rd party APIs that required a string representation of the private key, I decided on encrypting the string and only unencrypting it when I want to use it. In all other instances, the encrypted string would be copied or passed around, while the unencrypted string would not be.

I did some reading about System.Security.Cryptography's ProtectedMemory class, which allows you to encrypt a block of bytes whose length must be a multiple of 16 bytes. The interesting thing about doing this is being able to encode the length of your sensitive string within the encrypted 16n-byte block, so that when you unencrypt that block you can retrieve from it the length of the original string and recover the original string. This is similar to encoding the length of a packet that you send down the network.
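The length-prefix-plus-padding scheme can be sketched on its own in Python (illustrative only; the real implementation below is in C# and adds the encryption step):

```python
import struct

def encode(secret: bytes, multiple: int = 16) -> bytes:
    """Prefix the secret with its length (4 bytes) and pad the whole
    block with zero bytes up to a multiple of 16, as the encryption
    API requires."""
    payload = struct.pack("<i", len(secret)) + secret
    padding = (-len(payload)) % multiple  # bytes needed to reach 16n
    return payload + b"\x00" * padding

def decode(block: bytes) -> bytes:
    """Read the stored length from the first 4 bytes, then recover
    exactly that many bytes of the original secret."""
    (length,) = struct.unpack_from("<i", block, 0)
    return block[4:4 + length]

block = encode(b"my private key")
```

Here `len(block)` is a multiple of 16 and `decode(block)` recovers the original bytes, which is exactly what lets the C# code below round-trip the string through `Protect`/`Unprotect`.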

The implementation of an object that can encrypt a string, store it internally and decrypt upon request, is a ProtectedString:

using System;
using System.Security.Cryptography;
using System.Text;

namespace X.Y.Z
{
    /// <summary>
    /// Store string encrypted at rest.
    /// </summary>
    /// <remarks>You can copy this object freely</remarks>
    /// <remarks>Portable alternative to SecureString, using DPAPI</remarks>
    /// <remarks>Note SecureString is not recommended for new development</remarks>
    /// <remarks>https://docs.microsoft.com/en-us/dotnet/api/system.security.securestring</remarks>
    public class ProtectedString : IProtectedString
    {
       /// <summary>
       /// Secret area that is encrypted/decrypted
       /// </summary>
       private byte[] _secretData;

       private readonly object _lock = new();

       private bool IsProtected { get; set; }

       /// <summary>
       /// DPAPI access control for securing data
       /// </summary>
       private readonly MemoryProtectionScope _scope;

       /// <summary>
       /// Creates a ProtectedString
       /// </summary>
       /// <param name="sensitiveString">Sensitive string</param>
       /// <param name="scope">Scope of the protection</param>
       public ProtectedString(string sensitiveString = null,
                              MemoryProtectionScope scope = MemoryProtectionScope.SameProcess)
        {
            _scope = scope;

            // Store secret if provided and valid
            if(InputValid(sensitiveString))
                Set(sensitiveString);
        }

       private static bool InputValid(string sensitiveString)
       {
           return sensitiveString != null;
       }

       /// <inheritdoc />
        public void Set(string sensitiveString)
        {
            try
            {
                lock (_lock)
                {
                    if(!InputValid(sensitiveString))
                        throw new InvalidInputException();

                    // The secretData length should be a multiple of 16 bytes
                    
                    var secretDataLength = RoundUp(
                        sizeof(int) + // We will store the length of the
                                      // sensitiveString as the first sizeof(int) bytes in secretData
                        sensitiveString.Length, 16);

                    // Allocate array, all values set to \0 by .Net
                    _secretData = new byte[secretDataLength];

                    // Copy the length of the sensitiveString into the secretData
                    // first, starting at the first byte
                    BitConverter.GetBytes(sensitiveString.Length).CopyTo(_secretData, 0);

                    // Copy the sensitiveString itself after the length bytes above
                    Encoding.ASCII.GetBytes(sensitiveString).CopyTo(_secretData, sizeof(int));

                    // Encrypt our encoded secretData using DPAPI
                    ProtectedMemory.Protect(_secretData, _scope);

                    IsProtected = true;
                }
            }
            catch (Exception e)
            {
                IsProtected = false;

                if (e is ProtectedStringException)
                    throw;

                throw new Exception("Unexpected error while storing data from protected memory");
            }
        }

        /// <inheritdoc />
        public string Get()
        {
            try
            {
                lock (_lock)
                {
                    if (!IsProtected)
                        throw new NotProtectedException();

                    // Decrypt secretData
                    ProtectedMemory.Unprotect(_secretData, _scope);

                    // Determine how long our sensitiveString was by reading the integer at byte 0
                    var secretLength = BitConverter.ToInt32(_secretData, 0);

                    // Read that many bytes to recover the original sensitiveString
                    var sensitiveString = Encoding.ASCII.GetString(_secretData, sizeof(int), secretLength);

                    // Re-protect secretData after retrieval
                    Set(sensitiveString);

                    // Return a reference to unprotected string.
                    return sensitiveString;
                }
            }
            catch (Exception e)
            {
                if (e is ProtectedStringException)
                    throw;

                throw new Exception("Unexpected error while retrieving data from protected memory");
            }
        }

        private static int RoundUp(int numToRound, int multiple)
        {
            if (multiple == 0)
                return numToRound;

            int remainder = numToRound % multiple;
            if (remainder == 0)
                return numToRound;

            return numToRound + multiple - remainder;
        }
    }
}

The question is whether this is really useful at all from a security standpoint.

As soon as you unencrypt the contents, you get an unencrypted string back, and that string lives in memory and, in theory, can be found by memory scanning. Also, once you no longer hold a reference to that memory, the garbage collector will free it but won't zero it out (securely clear it), so it'll still be somewhere in memory, ...unencrypted.

Ultimately I never used this because of the reasons mentioned above, but it's still interesting...

Now, despite this, the above is still useful in some ways, provided you:

  • a) only copy or store the protected string or pass it between functions.
  • b) don't store the unencrypted string anywhere.

The other advantage is that the window of exposure of the unencrypted string is small (but it'll still get garbage collected), as you only unencrypt the ProtectedString when you want to use it, otherwise the secret is encrypted at rest.

Still, it doesn't help with the original problem of having unencrypted string copies lingering in system memory somewhere....

 

  • Encryption
  • C#
  • Software Engineering
  • Design
Details
Category: Blog
By Stuart Mathews
23 April 2021
Last Updated: 26 April 2021
Hits: 12838

ISO27001, Machine-Learning and Game dev

Since Implementing a Vignette, I've been pretty involved in getting various machine learning algorithms to, well, learn...

I've been playing around with SIFT feature extractors, Histograms of Oriented Gradients (HOG) and convolutional neural networks (CNNs), and at times it's been quite interesting.

When I started learning about machine learning techniques, I wouldn't say machine learning was of immediate interest to me. I think ever since I did a course on Data Management and Analysis, I kinda thought designing software was more my thing. Sure, the graphs were cool, and I like graphs, but too much data fiddling just becomes too much data fiddling. That said, I did not learn machine learning then, and the closest I got to classification was K-nearest neighbour techniques.

With machine learning, however, the mathematics is quite interesting, especially the partial derivative calculations that help you determine the impact that model weights are having on the loss function of your model (backpropagation). I did have to write them out by hand initially, because otherwise I just would not have understood it.
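The idea of a weight's impact on the loss can be sketched numerically: nudge the weight slightly, watch how the loss changes, and step against that gradient. The one-weight model and data below are invented for illustration, not from the post:

```python
def loss(w, b, xs, ys):
    # Mean squared error of a tiny linear model y = w*x + b
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def d_loss_dw(w, b, xs, ys, eps=1e-6):
    # Finite-difference estimate of the partial derivative of the
    # loss with respect to w: (L(w+eps) - L(w-eps)) / (2*eps)
    return (loss(w + eps, b, xs, ys) - loss(w - eps, b, xs, ys)) / (2 * eps)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # true relationship: y = 2x
w, b = 0.0, 0.0
for _ in range(200):
    w -= 0.1 * d_loss_dw(w, b, xs, ys)  # nudge w against the gradient
```

After the loop, `w` has been nudged to roughly 2.0, the weight that minimises the loss. Backpropagation computes these same partial derivatives analytically (via the chain rule) rather than by nudging, which is what makes it practical for large models.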

After you understand this, you start to see that parts of machine learning are very much a brute-force, nudge-it-until-it's-correct sort of discipline - which is effective, but this is an over-simplification of course, and there are more smarts involved.

What is really quite impressive is that PyTorch has a built-in Tensor type that will track how each tensor's value impacts an expression that involves that tensor - calculating the impact of a tensor on, say, the loss function is just a matter of calling backward(), and the entire object hierarchy involved in the expression is evaluated for its impact on it. Quite cool. This saves having to calculate the chain rule manually on a piece of paper! I also found it pretty cool how easy it was to move tensors to the GPU to speed up training times.

With this, I've pretty much swapped Ruby for Python, which has become almost an extension of me lately (same goes for C++, but more on that later). Interestingly, while designing my feature pipeline, I found that Python has no real concept of private members and the convention is just to prefix the member name with underscores. Abstract classes exist, which was useful, as I designed a pipeline (which is basically this) based on interfaces that allow uniform interaction while allowing varying underlying implementation details. 

My pipeline currently consists of two classifiers (SVM - Support Vector Machine - and MLP - Multi-Layer Perceptron), two feature extractors (SIFT and HOG) and one convolutional neural network (CNN) based on MobileNetV2.

The CNN I originally designed from scratch needed more training than I had time for, and so its learning was poor. So I've been fine-tuning this one using pre-trained weights, and I've just adapted it to learn the classes that I'm interested in.

I will say that I particularly enjoyed learning the machine learning theory, for example understanding why a non-linear function (Sigmoid or ReLU, for example) is used after linearly combining the input values: it produces variance in the shape of the function, which contributes to determining a function that best describes the input.

To this extent, this book was particularly useful in understanding 'why', rather than just doing it and moving on, which is so often the case with technical theory - not that this book was technical; it was more practical and provided diagrams like this one - which my brain seems to appreciate. I wouldn't say I'm proficient, however, but I'm interested, which is more than I could say before. 

It's a pity I don't have an NVidia graphics card, so I've been having to use the GPU in Google Colab, and I hit the limit a few times while training, but I eventually got 83% validation accuracy, which is pretty good. 

Apart from that, I've also been writing a lot of C++/OpenGL and finished a demo racing game, which I very much enjoyed programming. It's very simple but shows various important 3D graphical elements.

I've implemented an exponential fog effect, and my scene is basically themed on a Jurassic Park-styled atmosphere.

I've incorporated some meshes for the player car, the forest and the track. The path through the scene is calculated using Catmull-Rom splines, and the rear-view mirror is programmed using a framebuffer object.

Most of the shader code is for the lighting effects and the fog. For the lighting, the Blinn-Phong model is used. It's been very interesting managing the vertex buffers and drawing 3D primitives. 

What I'd like to do next is incorporate the library of code that I developed for the demo into a more abstract utility that I can use in the next thing I do. 

In terms of Information Security, I wrote a critical review about securing software development processes, as recommended by Clause 14 in ISO 27001, around the utility of implementing E-SecSDM, an engineering-focused software development model to improve the security of engineering systems that incorporate software in their design.

I think the learning I did on digital forensics, criminal law and network security previously was a bit more exciting than learning about ISO 27001/2, but like most things, it's useful to know a little more than you did before, so in this way, it's useful.

I've been out and about running also - that knee niggle seems to have sorted itself out (well, I did implement a no-run policy for about 2 months); however, my fitness has dropped off, but that's ok - I've been slowly working my way back up. My last couple of runs were slow, but they were pretty nice, especially now as the sun is starting to come out. 

Speaking of which, maybe I should go for a run now...


  • Game development
  • C++
  • Computer Vision
  • Computer Graphics
  • OpenGL
Details
Category: Code
By Stuart Mathews
26 January 2021
Last Updated: 05 February 2021
Hits: 6019

Implementing a Vignette

I recently had a task to create a vignette of a picture. This is a technique in computer vision or digital signal processing whereby, as you move closer and closer towards the centre of the image, the pixel intensity increases.

When first approaching this problem, I could not understand how, from a dense matrix of colour information, you could determine how far the pixel you were processing was from the centre of the image. I confirmed that no position information is available within the pixel itself (it just contains raw colour information). Finally, it dawned on me that you could use the coordinates of the image matrix to create a vector representing the centre point, i.e. the offset x and y from an origin.

But where is the origin?

I first thought I'd have to use the centre of the image as the origin, meaning all my pixel coordinates would need to be relative to that. This would have been a pain, as each pixel at matrix[x][y] would need to be re-expressed relative to the centre of the image rather than matrix[0][0]!

Then I realised that I could keep the origin at [0][0] in the matrix, i.e. image[0,0], for both the centre point and each respective pixel, and represent each as a vector displacement from that same origin. This was a breakthrough for me. Not only that, you could then generate a new vector for each pixel this way - all measured from [0,0] in the image matrix.

So now you have two 2D vectors from the same origin: one that points at the centre of the image, i.e. [max_cols/2, max_rows/2], and one that is [x,y] for the pixel you are currently processing. Subtracting the centre vector from the pixel vector gives the vector between the two, and its magnitude is the distance between the pixel you are on and the centre of the image - i.e. it is the hypotenuse formed by the two component differences.

The length of the resulting vector can be computed easily by passing it to np.linalg.norm(), i.e. getting the norm (length or magnitude) of the vector, and this is the distance. You could also do this yourself by squaring the components of the vector, adding them together and taking the square root, but the norm is much easier.
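As a tiny illustration of that step (the image dimensions here are just example values, not from the script below):

```python
import numpy as np

rows, cols = 300, 451                 # example image dimensions
centre = np.array([cols / 2, rows / 2])

pixel = np.array([10, 20])            # the [x, y] of the pixel being processed
diff = centre - pixel                 # vector from the pixel to the centre
dist = np.linalg.norm(diff)           # its magnitude = distance to the centre

# Doing it by hand gives the same answer
by_hand = np.sqrt(diff[0] ** 2 + diff[1] ** 2)
assert np.isclose(dist, by_hand)
```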

Now you can use that distance to drive the intensity value at that pixel!

With the distance d, you can derive the relative intensity using a function of d and the width of the image, i.e. brightness(d, img_max), to assign for that distance from the centre. That function produces e raised to the power of the negative ratio of distance to width. This equation was given to me, and I've represented it as the brightness function in the Python code below:

\[ f(x, y) = e^{-x/y} \]

where \(x\) is the distance from the centre and \(y\) is the width of the image.
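Plugging a few values into this function shows the falloff it produces (the width of 451 here is just an example):

```python
import math

def brightness(distance, image_width):
    # e^(-distance/width): 1.0 at the centre, decaying towards the edges
    return math.exp(-distance / image_width)

width = 451
print(brightness(0, width))          # 1.0 - full intensity at the exact centre
print(brightness(width / 2, width))  # e^-0.5, roughly 0.61, half a width out
print(brightness(width, width))      # e^-1, roughly 0.37, a full width away
```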

As the image was already in 24-bit RGB colour, I converted it to HSV so that I could manipulate the V component, which corresponds to the intensity; this is not easily determinable directly from the RGB data.

I could then manipulate the V component with a simple element-wise vector multiplication that scales only the V component by the brightness, leaving the Hue and Saturation intact (I multiplied those by 1, the identity).

This can be represented more concisely in this Python script I wrote:

# Change intensity of pixel colour, depending on the distance the pixel is from the centre of the image

from skimage import data, color
import numpy as np
import matplotlib.pyplot as plot
import math

# We can use the built-in image of Chelsea the cat
cat = data.chelsea()

# Convert to HSV to be able to manipulate the image intensity (v)
cat_vig = color.rgb2hsv(cat.copy())

print(f'Data type of cat is: {cat_vig.dtype}')
print(f'Shape of cat is: {cat_vig.shape}')

# Get the dimensions of the matrix (n-dimensional array of colour information)
[r, c, depth] = cat_vig.shape
v_center = np.array([c / 2, r / 2, 0])

# Derive the pixel intensity from the distance from the centre
def brightness(radius, image_width):
    return math.exp(-radius / image_width)


# Go through each pixel and calculate its distance from the centre,
# feed this into the brightness function and
# modify the intensity component (v) of [h,s,v] for that pixel
def version1(rows, cols, hsv_img, v_center):
    for y in range(rows):
        for x in range(cols):
            me = np.array([x, y, 0])
            dist = np.linalg.norm(v_center - me)
            hsv_img[y][x] *= [1, 1, brightness(dist, cols)]
            # alternative:
            # hsv_img[y][x][2] *= brightness(dist, cols)

# do it
version1(r, c, cat_vig, v_center)

# Convert back to RGB so we can show it with imshow()
cat_vig = color.hsv2rgb(cat_vig)
fig, ax = plot.subplots(1, 2)
ax[0].imshow(cat)  # Original version
ax[1].imshow(cat_vig)  # Vignette version
plot.show()
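As an aside, the per-pixel double loop can also be vectorised with NumPy broadcasting, computing every pixel's distance from the centre in one go. This is my own sketch of that alternative, not part of the script above:

```python
import numpy as np
from skimage import data, color

cat = data.chelsea()
hsv = color.rgb2hsv(cat.copy())
rows, cols = hsv.shape[:2]

# Grids holding each pixel's y and x coordinate
ys, xs = np.mgrid[0:rows, 0:cols]

# Distance of every pixel from the centre, all at once
dist = np.hypot(xs - cols / 2, ys - rows / 2)

# Scale only the V channel by the brightness falloff
hsv[..., 2] *= np.exp(-dist / cols)

vignette = color.hsv2rgb(hsv)
```

The result is identical in spirit to the loop version but runs in a handful of array operations rather than one Python iteration per pixel.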

This was pretty interesting, and I do love it when theoretical maths, i.e. linear algebra, applies to practical outcomes!

Q.E.D:

aggressive lick pic.twitter.com/40fZHFY2w1

— Animal Life (@animalIife) January 25, 2021
  • Python
  • Computer Vision
  • Computer Graphics
Details
Category: Running
By Stuart Mathews
Stuart Mathews
04.Dec
04 December 2020
Last Updated: 04 December 2020
Hits: 4496

Fixing the thunder in my feet

I've just come back from a 21km run and it was fantastic, so I'm just going to reflect on what worked and how it went, to perhaps explore why.

I headed out just after half-past one in a long sleeve top. It was pretty cold outside. The original goal was to run 5km up the hill and onwards a little bit before turning around.

I've been thinking on a couple of my last runs that I should probably slow down a bit more and try to maintain a consistent, albeit slower, pace throughout my run. This analysis was done on the back of the last couple of 10km runs I'd done, where I noticed that I've got a tendency to speed up in the beginning and also at the end of the run.

The net effect of this is that I think I tend to strain at the end, when perhaps I could just coast over the finish line.

So with that in mind, I decided to head out a bit slower and try to keep it calm, cool and collected, and I think this is what ultimately made this particular run so easy.

I didn't push forward; I just pulled back when I was moving a bit fast or when I was developing a stitch, and then I'd just 'canter'. This meant that I was able to notice a lot of things about my running behaviour.

For example, I slowed down and my feet were not being banged up - usually after an 11km run they are pretty torn up with impact shock and sometimes blisters. I still think that this is partly to do with the rate I'm running at, perhaps coupled with my weight - I'm gaining weight.

Either way that combination is not great, so slowing down has fixed the 'thunder' in my feet that I've been having lately.  

Slowing down has allowed me to concentrate on listening to the pain, and adjusting. Usually, I ignore pain when I run 'fast', around say 4"30 or so. It's almost like you're focusing so much on pace that you don't care about what your feet are feeling or going through.

So with this slower style, I could find a pace that did not mash up my feet. This, I think, is also a large reason why running past the usual threshold of 10-11km was so easy and why I was not even aware of any discomfort. What I was aware of was a distinct lack of discomfort in my feet, and perhaps I should just see how far they would take me.

So the shoes are not the problem (I never thought they were, as I've been using them for years, at least this particular model). The other thing that might have played a part was that I was listening to my favourite songs of the year. That helped.

From a psychological point of view, I was not hard on myself because I'd already said to myself at the start of the run that I was going to go slower.

In the uphill stages or the more treacherous terrain where I could have struggled, I just said to myself, 'calm down, you're running slower now, so if you need to run any slower that's fine', and this approach worked.

Being OK with slowing down, and perhaps speeding up, is fine. Going one speed, or the speed you predicted you should go at, takes its toll to maintain, particularly psychologically but also physically, and I think corroborating the feeling in your arms and legs with appropriate adjustments, including to the pace, makes the run smoother - this is a breakthrough really.

Running should not be turmoil or an obstacle. 

I was wary about wearing long sleeves though. I've got a good track record of enjoying running in short sleeves, because ultimately I warm up sufficiently, so I've resisted the need to change. And I think this is what is important: being OK to adapt, even below expectations, is what made this run better than all the others this year - and certainly twice as long as usual.

The route took me back to my old workplace, around it and all the way back again. It was nice to be in familiar surroundings and I felt fine.

Look, I'd be lying if I said that I didn't feel uncomfortable at times, especially as the clock ticked on over the hour mark, but I just slowed down and kept it in gear, steady and calm, and ultimately this avoided disaster.

Looking back at the stats, in the end, a 4"59 pace is mighty impressive because I did not think I'd be reaching anywhere near that. If anything, I thought at my pace I would be trundling on at around 6 minutes per km. That shows that it wasn't that slow, and I was just running at whatever felt most comfortable, which varied throughout the route.

I never look at my watch routinely when running.

I've always found that this helps to reduce the stress and strain of accommodating expectations - I think it also messes with your stride.

I don't think we should have expectations about how long we take; we should perhaps have expectations about how long we'll run for, and then manage all the factors in the run to make it happen - if that means stopping, slowing down, taking a picture or taking a pee, whatever - do it and then carry on running.

As it happens I needed a leak at about 16km in, so I pulled over into a footpath that was abandoned (I had a good nose about to ensure I'd not be interrupted).

This is the first time I've needed to take a whizz mid-run, but I stopped. I rationalised to myself that it would be uncomfortable not to, and why spoil this zen-like run by having to worry about that?

When it comes to my preparation in terms of what I ate - nothing special: I just had my usual porridge and a cup of decaf coffee.

I did, however, sleep until 12 pm. This was an important factor too perhaps. I was well-rested. 

In the end, the long sleeve top was almost unnoticeable and maybe it actually helped me stay comfortable and zen.

This is exactly how it's supposed to be.

  • Great run
  • Analysis