
Implementing a Vignette

Details
Stuart Mathews
Code
26 January 2021
Created: 26 January 2021
Last Updated: 05 February 2021
Hits: 239

I recently had a task to create a vignette of a picture. This is a technique in computer vision or digital signal processing whereby the pixel intensity increases as you move closer and closer towards the centre of the image (and falls away towards the edges).

When first approaching this problem, I could not understand how, from a dense matrix of colour information, you could determine how far the pixel you were processing was from the centre of the image. I confirmed that no position information is available within the pixel itself (it just contains raw colour information). Finally, it dawned on me that you could use the coordinates of the image matrix to create a vector representing the centre point, i.e. the offset x and y from an origin.

But where is the origin?

I first thought I'd have to use the centre of the image as the origin, meaning all my pixel coordinates would need to be relative to that. This would have been a pain, as each pixel at matrix[x][y] would need to be re-expressed relative not to matrix[0][0] but to the centre of the image!

Then I realised that I could keep the origin at [0][0] in the matrix, i.e. image[0,0], for both the centre point and each respective pixel, and each could then be represented as a vector displacement from that same origin. This was a breakthrough for me. Not only that, you could generate a new vector for each pixel this way, all measured from [0,0] in the image matrix, giving the position of that pixel relative to the same origin.

So, now you have two 2D vectors from the same origin: one that points at the centre of the image, i.e. [max_cols/2, max_rows/2], and one that is [x, y] for the pixel you are currently processing. You can now subtract the vector representing the centre point from the pixel vector; the result is the vector between the two, and its magnitude is the distance between the pixel you are on and the centre of the image.

The length of the resulting vector can easily be obtained by passing it to np.linalg.norm(), i.e. getting the norm (length or magnitude) of the vector, and this is the distance. You could also do this yourself by squaring the components of the vector, adding them together and taking the square root, but this is much easier.
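
As an aside, here is a minimal NumPy sketch of that distance calculation; the image size and the example pixel coordinate here are made up purely for illustration:

# Distance of an example pixel from the image centre (illustrative values only)
import numpy as np

rows, cols = 300, 451                      # image height and width
v_center = np.array([cols / 2, rows / 2])  # vector to the centre of the image
v_pixel = np.array([10, 20])               # vector to an example pixel [x, y]

# Distance via the norm of the difference vector
dist = np.linalg.norm(v_center - v_pixel)

# The manual equivalent: square the components, sum them, take the square root
diff = v_center - v_pixel
dist_manual = np.sqrt(np.sum(diff ** 2))

assert np.isclose(dist, dist_manual)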

Now you can use that distance to drive the intensity value at that pixel!

With the distance (d), you can derive the relative intensity using a function of d and the width of the image, i.e. brightness(d, image_width), which gives the intensity to assign for that distance from the centre. That function produces e raised to the power of minus the ratio of distance to width. This equation was given, and I've represented it as the brightness function in the Python code below:

\[ f(x,y) = e^{-\frac{x}{y}} \]

where \(x\) is the distance from the centre and \(y\) is the width of the image.
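
To get a feel for this: at the centre the distance is 0, so the multiplier is e^0 = 1 (full intensity), while at a distance equal to the image width it falls to e^{-1} ≈ 0.37, so pixels dim smoothly the further they are from the centre.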

As the image was already in 24-bit RGB colour, I converted it to HSV so that I could manipulate the V component, which corresponds to the intensity; this is not immediately or easily determinable from the raw RGB data.

I could then manipulate the V component by doing a simple element-wise multiplication with a [1, 1, brightness] mask, which scales only the V component by the brightness while leaving the hue and saturation intact (those are multiplied by 1, the identity).

This can be represented more concisely in this Python script I wrote:

# Change intensity of pixel colour, depending on the distance the pixel is from the centre of the image

from skimage import data
import numpy as np
import matplotlib.pyplot as plot
from skimage import color, img_as_float, img_as_ubyte
import math

# We can use the built in image of Chelsea the cat
cat = data.chelsea()

# Convert to HSV to be able to manipulate the image intensity(v)
cat_vig = color.rgb2hsv(cat.copy())

print(f'Data type of cat is: {cat_vig.dtype}')
print(f'Shape of cat is: {cat_vig.shape}')

# Get the dimension of the matrix (n-dimensional array of colour information)
[r, c, depth] = cat_vig.shape
v_center = np.array([c / 2, r / 2, 0])  # vector to the centre of the image (x, y, unused channel)

# Derive the pixel intensity from the distance from center
def brightness(radius, image_width):
    return math.exp(-radius / image_width)


# Go through each pixel and calculate its distance from center
# feed this into the brightness function
# modify the intensity component (v) of [h,s,v] for that pixel
def version1(rows, cols, hsv_img, v_center):
    for y in range(rows):
        for x in range(cols):
            me = np.array([x, y, 0])
            dist = np.linalg.norm(v_center - me)
            # Scale only the V component; H and S are multiplied by 1 (identity)
            hsv_img[y][x] *= [1, 1, brightness(dist, cols)]
            # alternative:
            # hsv_img[y][x][2] *= brightness(dist, cols)

# do it
version1(r,c, cat_vig, v_center)

# Convert back to RGB so we can show in imshow()
cat_vig = color.hsv2rgb(cat_vig)
fig, ax = plot.subplots(1, 2)
ax[0].imshow(cat)  # Original version
ax[1].imshow(cat_vig)  # vignette version
plot.show()

This was pretty interesting, and I do love it when theoretical maths, i.e. linear algebra, applies to practical outcomes!

Q.E.D:

aggressive lick pic.twitter.com/40fZHFY2w1

— Animal Life (@animalIife) January 25, 2021
Tags: Python, Computer Vision, Computer Graphics

Fixing the thunder in my feet

Details
Stuart Mathews
Running
04 December 2020
Created: 04 December 2020
Last Updated: 04 December 2020
Hits: 372

I've just come back from a 21km run and it was fantastic and I'm just going to reflect on what worked and how it went to perhaps explore why.

I headed out just after half-past one in a long sleeve top. It was pretty cold outside. The original goal was to run 5km up the hill and onwards a little bit before turning around.

I've been thinking on a couple of my last runs that I should probably slow down a bit more and try to maintain a consistent, albeit slower, pace throughout my run. This analysis was done on the back of the last couple of 10km runs, where I noticed that I've got a tendency to speed up at the beginning and also at the end of the run.

The net effect of this is that I think I tend to strain in the end when perhaps I could just coast over the finish line.

So with that in mind, I've decided to head out a bit slower and try to keep it calm, cool and collected and I think this is what ultimately made this particular run so easy.

I didn't push forward, I just pulled back when I was moving a bit fast or when I was developing a stitch, and then I'd just 'canter'. This meant that I was able to notice a lot of things about my running behaviour.

For example, I slowed down and my feet were not being banged up; usually after an 11km run they are pretty torn up with impact shock and sometimes blisters. I still think that this is partly to do with the rate I'm running at, perhaps coupled with my weight - I'm gaining weight.

Either way that combination is not great, so slowing down has fixed the 'thunder' in my feet that I've been having lately.  

Slowing down has allowed me to concentrate on listening to the pain and adjusting. Usually, I ignore pain when I run 'fast', around 4"30 or so. It's almost like you're focusing so much on pace that you don't care about what your feet are feeling or going through.

So with this slower style, I could find a pace that did not mash up my feet. This, I think, is also a large reason why running past the usual threshold of 10-11km was so easy and why I was not even aware of any discomfort. What I was aware of was a distinct lack of discomfort in my feet, and perhaps I should just see how far they would take me.

So the shoes are not the problem (I never thought they were, as I've been using them for years, at least this particular model). The other thing that might have played a part was that I was listening to my favourite songs of the year. That helped.

From a psychological point of view, I was not hard on myself because I'd already said to myself at the start of the run that I was going to go slower.

In the uphill stages or the more treacherous terrain where I could have struggled, I just said to myself 'calm down, you're running slower now so if you need to run any slower that's fine' and this approach worked.

Being OK with slowing down, and perhaps speeding up, is OK. Going at one speed, or the speed you predicted you should go at, takes its toll, particularly psychologically but also physically, and I think corroborating the feeling in your arms and legs with appropriate adjustments, including to the pace, makes the run smoother - this is a breakthrough really.

Running should not be turmoil or an obstacle. 

I was wary about wearing long sleeves though. I've got a good track record of enjoying running in short sleeves because ultimately I warm up sufficiently, so I've resisted the need to change. And I think this is what is important: being OK to adapt, even below expectations, is what made this run better than all the others this year - and certainly twice as long as usual.

The route took me back to my old workplace, around it and all the way back again. It was nice to be in familiar surroundings and I felt fine.

Look, I'd be lying if I said that I didn't feel uncomfortable at times, especially as the clock ticked on over the hour mark, but I just slowed down and kept it in gear, steady and calm, and ultimately this avoided disaster.

Looking back at the stats, in the end a 4"59 pace is mighty impressive, because I did not think I'd be reaching anywhere near that. If anything, I thought at my pace I would be trundling on at around 6 minutes per km. That shows it wasn't that slow; I was just running at where I was most comfortable, and that varied throughout the route.

I never look at my watch routinely when running.

I've always found that this helps to reduce the stress and strain of accommodating expectations - I think it also messes with your stride.

I don't think we should have expectations about how long we take; we should perhaps have expectations about how long we'll run for and then manage all the factors in the run to make it happen - if that means stopping, slowing down, taking a picture or taking a pee, whatever - do it and then carry on running.

As it happens I needed a leak at about 16km in, so I pulled over into a footpath that was abandoned (I had a good nose about to ensure I'd not be interrupted).

This is the first time I've needed to take a whizz mid-run, but I stopped. I rationalised to myself that it would be uncomfortable not to, and why interrupt this zen-like run by having to worry about that?

When it comes to my preparation in terms of what I ate - nothing special: I just had my usual porridge and a cup of decaf coffee.

I did, however, sleep until 12 pm. This was an important factor too perhaps. I was well-rested. 

In the end, the long sleeve top was almost unnoticeable and maybe it actually helped me stay comfortable and zen.

This is exactly how it's supposed to be.

Tags: Great run, Analysis

Sufficiently Complex

Details
Stuart Mathews
Blog
02 December 2020
Created: 02 December 2020
Last Updated: 02 December 2020
Hits: 666

I think that change is a wonderful thing - and it is inspirational - you have the opportunity to put into action that which you have had a chance to evaluate... Sure, you can evaluate constantly, but none is more effective than the evaluation done right before some major required action. That said, it is not always the case, and change can be seen as either an opportunity or a failure.

The SAS, for example, suggest that inducing unexpected circumstances in military training best confirms the effectiveness of the application of learning. I would probably agree with that and suggest that it probably also boosts soldiers' confidence.

Opportunity or failure: I believe which one it is depends not only on your mindset but also on how persistent you are. The latter is, I think, a function of practice, particularly of endurance generally, but perhaps specifically of overcoming obstacles consistently and dealing with new situations as they occur. I mean, that's life pretty much, isn't it?

I think programmers deal with new and unexpected situations all the time - we rationalise, evaluate, apply and move on without any hoo-ha. In this way, I think we consume a lot of uncertainty. The next problem, however, is always more interesting.

Interestingly, I think a lot of time is spent not on solving the problems but on choosing the solutions that solve the problem for the longest time... as change is inevitable in software engineering. I have come to realise that trying to find the best solution is impossible; there are not enough knowns in any software project, and it comes down to pretty much civil law: a balance of probabilities that the solution is effective given what we know.

It's not easy to remember past events, specifically the influences, feelings and effects that events had without revisiting them, which I think is crucial for well-being and confidence. Exercising and thinking about things helps to plan what to do next.

Voltaire once said: "No problem can withstand the assault of sustained thinking", but I'm paraphrasing because he was French - and would probably have said something like: "Aucun problème ne peut résister à l'assaut de pensées résolues."

For example, earlier this year I had to complete a three-part programming exercise online and it was daunting at first (this was as lockdown started, so tons of uncertainty), and I guess, as with any unexpected event, a degree of trepidation was felt for sure. Beyond the difficulty of the exercise itself and the technical complexity of the scenarios, what came out of overcoming it was psychological growth and a sense of self-determination...

I also participated in a live group-work session via Skype to solve a much larger problem, using hand-drawn diagrams, evaluating requirements and establishing a strategy. I'm a whole lot more comfortable with this now, and I guess that is what experience is: everything I've done in the past and have encountered has taught me something that I can use in the next round.

I've become a lot more confident, for example, because I realised while doing it how much I really enjoy solving problems and modelling situations, and the whole experience was actually a lot of fun. I ended up developing a parser for markdown in one scenario. Of the other scenarios, one required me to implement a tracking system, using virtual functions to invoke overriding behaviour through base-class references, and the other was to do with sorting.

Small things like the above, you often brush away as just being part of the work to be done, but in doing so you lose so much valuable tacit knowledge that would otherwise fade with time.

After this, I started a big project at work, research and development, and learnt so much about distributed databases and sharding it's quite amazing. I'm well suited, I think, to working on large projects that are high in uncertainty. I also learnt a new language (not French - Ruby), which I've quite enjoyed. See my discussion around Deadlocks and databases.

I also began a new course on Information Security, which is pretty neat; I'm working at a more theoretical level in this course than I did in Network Security, which, like Digital Forensics, was highly technical. It is basically about implementing ISO 27001 by implementing an ISMS (Information Security Management System), so I'm doing a lot of assessing of criteria, risks, controls, security requirements, policies etc. It's pretty interesting.

Besides that, recently I've been working on my 2D game engine, which is currently undergoing a major refactor. I find that if I can reason about a system within an A5 page in my notebook, then the system is sufficiently complex.

I've got pretty much most of the basics in place: a resource manager that swaps scene resources in and out of memory (that works OK), and scene management is there, with a system of layer hierarchies to manage drawing using the painter's algorithm - z-order, that is.
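
For illustration only, here is a minimal Python sketch of the painter's-algorithm idea that such a layer system rests on: draw layers from back to front by sorting on z-order. The Layer class and draw calls are hypothetical names, not taken from the actual engine.

# Hypothetical sketch of painter's-algorithm layer ordering: layers are drawn
# from lowest to highest z-order so nearer layers overdraw the ones behind them.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    z_order: int
    items: list = field(default_factory=list)  # drawable items in this layer

    def draw(self):
        # In a real engine this would issue draw calls for each item
        print(f"drawing layer '{self.name}' (z={self.z_order})")

def draw_scene(layers):
    # Painter's algorithm: render back-to-front, sorted by z-order
    for layer in sorted(layers, key=lambda l: l.z_order):
        layer.draw()

draw_scene([
    Layer("HUD", z_order=10),
    Layer("background", z_order=0),
    Layer("sprites", z_order=5),
])
# draws: background, then sprites, then HUD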