Category: Blog
By Stuart Mathews
23 April 2021 (Last Updated: 26 April 2021)

ISO27001, Machine-Learning and Game dev

Since Implementing a Vignette, I've been pretty involved in getting various machine learning algorithms to, well, learn...

I've been playing around with SIFT feature extractors, histograms of oriented gradients (HOG) and convolutional neural networks (CNNs), and at times it's been quite interesting.

When I started learning about machine learning techniques, I wouldn't say machine learning was of immediate interest to me. Ever since I did a course on Data Management and Analysis, I'd kinda thought designing software was more my thing. Sure, the graphs were cool, and I like graphs, but too much data fiddling just becomes too much data fiddling. That said, I didn't actually learn machine learning on that course, and the closest I got to classification was k-nearest neighbour techniques.

With machine learning, though, the mathematics is quite interesting, especially the partial derivative calculations that help you determine the impact that model weights are having on the loss function of your model (backpropagation). I did have to write them out by hand initially, because otherwise I just would not have understood it.

After you understand this, you start to see that parts of machine learning are very much a brute-force, nudge-it-until-it's-correct sort of discipline - which is effective, though this is an over-simplification of course, and there are more smarts involved.

What is really quite impressive is that PyTorch has a built-in Tensor type that will track how each tensor's value impacts an expression that involves that tensor - calculating the impact of a tensor on, say, the loss function is just a matter of calling backward(), and the entire graph of objects involved in the expression is evaluated for its impact on that expression. Quite cool. This means you don't have to work out the chain rule manually on a piece of paper! I also found it pretty cool how easy it was to move tensors to the GPU to speed up training times.
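
To make that concrete, here is a minimal sketch of the idea (the tensors and the toy loss are illustrative, not from my actual model):

import torch

# requires_grad tells PyTorch to record operations on this tensor so
# gradients can be computed later
w = torch.tensor([2.0, -1.0], requires_grad=True)
x = torch.tensor([1.0, 3.0])

# A toy scalar "loss" built from w and x
loss = ((w * x).sum() - 5.0) ** 2

# backward() walks the recorded graph and applies the chain rule,
# leaving d(loss)/d(w) in w.grad - no manual derivatives needed
loss.backward()
print(w.grad)

# Moving tensors to the GPU (when one is available) is a one-liner
if torch.cuda.is_available():
    x = x.to("cuda")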

With this, I've pretty much swapped Ruby for Python, which has become almost an extension of me lately (the same goes for C++, but more on that later). Interestingly, while designing my feature pipeline, I found that Python has no real concept of private members - the convention is just to prefix the name with underscores (a double leading underscore additionally triggers name mangling). Abstract classes exist, which was useful, as I designed a pipeline (which is basically this) that is based on interfaces that allow uniform interaction while varying the underlying implementation details.
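
As a rough illustration of the interface idea (the class and method names here are just for the example, not my pipeline's actual ones):

from abc import ABC, abstractmethod

# Feature extractors share a uniform interface while the underlying
# implementation details vary per extractor
class FeatureExtractor(ABC):
    @abstractmethod
    def extract(self, image):
        """Return a feature vector for the given image."""

class HogExtractor(FeatureExtractor):
    def extract(self, image):
        # in a real pipeline this would delegate to e.g. skimage.feature.hog
        return [0.0]

class SiftExtractor(FeatureExtractor):
    def extract(self, image):
        # in a real pipeline this would delegate to e.g. OpenCV's SIFT
        return [0.0]

def extract_all(extractor: FeatureExtractor, images):
    # The pipeline only ever talks to the abstract interface
    return [extractor.extract(img) for img in images]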

My pipeline currently consists of two classifiers (SVM, a support vector machine, and MLP, a multi-layer perceptron), two feature extractors (SIFT and HOG), and one convolutional neural network (CNN) based on MobileNetV2.

The CNN I originally designed from scratch needed more training than I had time for, so it learnt poorly. So I've been fine-tuning the MobileNetV2 one instead, using pre-trained weights and just adapting it to learn the classes that I'm interested in.
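
Roughly, the fine-tuning looks something like this (the class count and hyperparameters are placeholders, not my actual values):

import torch.nn as nn
from torch.optim import Adam
from torchvision import models

num_classes = 4  # placeholder: however many classes the pipeline targets

# Start from pre-trained ImageNet weights and freeze the backbone
model = models.mobilenet_v2(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head so only the new layer learns the new classes
model.classifier[1] = nn.Linear(model.last_channel, num_classes)

optimizer = Adam(model.classifier[1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()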

I will say that I particularly enjoyed the learning around machine learning theory, for example understanding why a non-linear function (Sigmoid or ReLU, for example) is applied after linearly combining the input values: without the non-linearity, stacked layers collapse into a single linear function, so it is the non-linearity that gives the network the variance in shape needed to find a function that best describes the input.
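
A tiny numeric illustration of that point (purely illustrative numbers): two linear layers with no activation in between are equivalent to one linear layer, so on their own they can never model anything more interesting than a straight line.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # first "layer"
W2 = rng.normal(size=(2, 3))   # second "layer"
x = rng.normal(size=4)

two_layers = W2 @ (W1 @ x)     # no non-linearity in between
collapsed = (W2 @ W1) @ x      # a single equivalent linear map
assert np.allclose(two_layers, collapsed)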

To this end, this book was particularly useful in understanding the 'why', and not just doing it and moving on, which is so often the case with technical theory - not that this book was technical; it was more practical and provided diagrams like this one, which my brain seems to appreciate. I wouldn't say I'm proficient yet, but I'm interested, which is more than I could say before.

Pity I don't have an NVIDIA graphics card, so I've been having to use the GPU in Google Colab. I hit the limit a few times while training but eventually got 83% validation accuracy, which is pretty good.

Apart from that, I've also been writing a lot of C++/OpenGL and finished a demo racing game, which I very much enjoyed programming. It's very simple but shows various important 3D graphical elements.

I've implemented an exponential fog effect, and my scene is basically themed on a Jurassic Park-styled atmosphere.
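
The exponential fog boils down to a single expression; here's a rough sketch of the factor involved (the density value is illustrative - in the demo this runs per-fragment in the shader):

import math

def fog_factor(distance_to_camera, density=0.05):
    # 1.0 means no fog, tending towards 0.0 as the fragment gets further away
    return math.exp(-density * distance_to_camera)

# the final colour is then a blend: mix(fog_colour, surface_colour, fog_factor(d))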

I've incorporated some meshes for the player car, the forest and the track. The path through the scene is calculated using Catmull-Rom splines, and the rear-view mirror is programmed using a framebuffer object.
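
For reference, this is a sketch of Catmull-Rom interpolation in Python (the demo itself is C++; the control points here are made up):

import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    # Standard Catmull-Rom basis: interpolates smoothly between p1 and p2
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

# Sample a smooth path segment through four control points along the track
points = [np.array(p, dtype=float) for p in [(0, 0), (1, 2), (3, 3), (4, 0)]]
path = [catmull_rom(*points, t) for t in np.linspace(0.0, 1.0, 10)]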

Most of the shader code is for the lighting effects and the fog. For the lighting, the Blinn-Phong model is used. It's been very interesting managing the vertex buffers, drawing 3D primitives, etc.

What I'd like to do next is incorporate the library of code that I developed for the demo into a more abstract utility that I can use in the next thing I do. 

In terms of information security, I wrote a critical review about securing software development processes, as recommended by Clause 14 of ISO 27001, looking at the utility of implementing E-SecSDM, an engineering-focused software development model intended to improve the security of engineering systems that incorporate software in their design.

I think the learning I did on digital forensics, criminal law and network security previously was a bit more exciting than learning about ISO 27001/2, but like most things, it's useful to know a little more than you did before, so in that way, it's useful.

I've been out and about running also - that knee niggle seems to have sorted itself out (well, I did implement a no-run policy for about two months), though my fitness has dropped off. That's OK - I've been slowly working my way back up. My last couple of runs were slow, but they were pretty nice, especially now as the sun is starting to come out.

Speaking of which, maybe I should go for a run now...

  • Game development
  • C++
  • Computer Vision
  • Computer Graphics
  • OpenGL
Category: Code
By Stuart Mathews
26 January 2021 (Last Updated: 05 February 2021)

Implementing a Vignette

I recently had a task to create a vignette of a picture. This is a technique in computer vision or digital signal processing whereby the pixel intensity increases as you move closer and closer towards the centre of the image (and falls off towards the edges):

When first approaching this problem, I could not understand how, from a dense matrix of colour information, you could determine how far the pixel you were processing was from the centre of the image. I confirmed that no position information is available within the pixel itself (it just contains raw colour information). Finally, it dawned on me that you could use the coordinates of the image matrix to create a vector to represent the centre point, i.e. the offset in x and y from an origin.

But where is the origin?

I first thought I'd have to use the centre of the image as the origin, meaning all my pixel co-ordinates would need to be relative to that, which would have been a pain: each pixel at matrix[x][y] would need to be represented relative not to matrix[0][0] but to the centre of the image!

Then I realised that I could keep the origin at [0][0] in the matrix, i.e. image[0,0], for both the centre point and each respective pixel, and each could thus be represented by a vector displacement from that same origin. This was a breakthrough for me. Not only that, you could then generate a new vector for each pixel this way - all using [0,0] in the image matrix as the common origin.

So now you have two 2D vectors from the same origin: one that points at the centre of the image, i.e. [max_cols/2, max_rows/2], and one that is [x,y] for the pixel you are currently processing. You can now subtract the vector representing the centre point from the pixel vector, which gives you the vector between the two; its magnitude is the distance between the pixel you are on and the centre of the image - it's the hypotenuse formed by the two sides (the two vectors).

The length of the resulting vector can be obtained easily by passing the vector to np.linalg.norm() - i.e. get the norm of the vector (its length or magnitude), and this is the distance. I guess you could also do this yourself by squaring the components of the vector, adding them together and taking the square root, but this is much easier.
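
For example (the centre and pixel coordinates here are made up), the two approaches agree:

import numpy as np

centre = np.array([160.0, 120.0])
pixel = np.array([10.0, 200.0])

# Distance via the norm of the difference vector
d = np.linalg.norm(centre - pixel)

# The same thing by hand: square the components, sum them, take the square root
diff = centre - pixel
d_manual = np.sqrt((diff ** 2).sum())

assert np.isclose(d, d_manual)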

Now you can use that distance to drive the intensity value at that pixel!

With the distance (d), you can derive the relative intensity using a function of d and the width of the image, i.e. brightness(d, img_max), to give the intensity to assign for that distance from the centre. That function produces e raised to the power of minus the ratio of distance to width. This equation was given, and I've represented it as the brightness function in the Python code below:

\[ f(x,y) = e^{-x/y} \quad \text{where } x \text{ is the distance from the centre and } y \text{ is the width of the image} \]

As the image was already in 24-bit RGB colour, I converted it to HSV so that I could manipulate the V component, which corresponds to the intensity; this is not immediately or easily determinable from the raw RGB data.

I could then manipulate the V component with a simple element-wise multiplication that masks each pixel with [1, 1, brightness], scaling only the V component by the brightness and leaving the hue and saturation intact (those are multiplied by 1, the identity).

This is represented more concisely in this Python script I wrote:

# Change intensity of pixel colour, depending on the distance the pixel is from the centre of the image

from skimage import data, color
import numpy as np
import matplotlib.pyplot as plot
import math

# We can use the built-in image of Chelsea the cat
cat = data.chelsea()

# Convert to HSV to be able to manipulate the image intensity (v)
cat_vig = color.rgb2hsv(cat.copy())

print(f'Data type of cat is: {cat_vig.dtype}')
print(f'Shape of cat is: {cat_vig.shape}')

# Get the dimensions of the matrix (n-dimensional array of colour information)
[r, c, depth] = cat_vig.shape

# Vector pointing at the centre of the image (third component unused, kept for shape)
v_center = np.array([c / 2, r / 2, 0])

# Derive the pixel intensity from the distance from the centre
def brightness(radius, image_width):
    return math.exp(-radius / image_width)


# Go through each pixel and calculate its distance from the centre,
# feed this into the brightness function and
# modify the intensity component (v) of [h, s, v] for that pixel
def version1(rows, cols, hsv_img, v_center):
    for y in range(rows):
        for x in range(cols):
            me = np.array([x, y, 0])
            dist = np.linalg.norm(v_center - me)
            hsv_img[y][x] *= [1, 1, brightness(dist, cols)]
            # alternative: scale only the v channel directly
            # hsv_img[y][x][2] *= brightness(dist, cols)

# Apply the vignette in place
version1(r, c, cat_vig, v_center)

# Convert back to RGB so we can show it with imshow()
cat_vig = color.hsv2rgb(cat_vig)
fig, ax = plot.subplots(1, 2)
ax[0].imshow(cat)      # Original version
ax[1].imshow(cat_vig)  # Vignette version
plot.show()

This was pretty interesting, and I do love it when theoretical maths, i.e. linear algebra, applies to practical outcomes!

Q.E.D:

aggressive lick pic.twitter.com/40fZHFY2w1

— Animal Life (@animalIife) January 25, 2021
  • Python
  • Computer Vision
  • Computer Graphics
Category: Running
By Stuart Mathews
04 December 2020 (Last Updated: 04 December 2020)

Fixing the thunder in my feet

I've just come back from a 21 km run and it was fantastic, so I'm just going to reflect on what worked and how it went, to perhaps explore why.

I headed out just after half-past one in a long sleeve top. It was pretty cold outside. The original goal was to run 5km up the hill and onwards a little bit before turning around.

I've been thinking on a couple of my last runs that I should probably slow down a bit more and try to maintain a consistent, albeit slower, pace throughout my run. This analysis was done on the back of the last couple of 10 km runs I'd done, where I'd noticed that I've got a tendency to speed up at the beginning and also at the end of the run.

The net effect of this is that I think I tend to strain at the end, when perhaps I could just coast over the finish line.

So with that in mind, I've decided to head out a bit slower and try to keep it calm, cool and collected and I think this is what ultimately made this particular run so easy.

I didn't push forward; I just pulled back when I was moving a bit fast or when I was developing a stitch, and then I'd just 'canter'. This meant that I was able to notice a lot of things about my running behaviour.

For example, I slowed down and my feet were not being banged up, whereas usually after an 11 km run they are pretty torn up with impact shock and sometimes blisters. I still think that this is partly to do with the rate I'm running at, perhaps coupled with my weight - I'm gaining weight.

Either way that combination is not great, so slowing down has fixed the 'thunder' in my feet that I've been having lately.  

Slowing down has allowed me to concentrate on listening to the pain, and adjusting. Usually, I ignore pain when I run 'fast', around say 4"30 or so. It's almost like you're focusing so much on pace that you don't care about what your feet are feeling or going through.

So with this slower style, I could find a pace that did not mash up my feet. This, I think, is also a large reason why running past the usual threshold of 10-11 km was so easy and why I was not even aware of any discomfort. What I was aware of was a distinct lack of discomfort in my feet, and I thought perhaps I should just see how far they would take me.

So the shoes are not the problem (I never thought they were, as I've been using them for years, at least this particular model). The other thing that might have played a part was that I was listening to my favourite songs of the year. That helped.

From a psychological point of view, I was not hard on myself because I'd already said to myself at the start of the run that I was going to go slower.

In the uphill stages or the more treacherous terrain where I could have struggled, I just said to myself 'calm down, you're running slower now so if you need to run any slower that's fine' and this approach worked.

Being OK with slowing down, and perhaps speeding up, is fine. Going at one speed, or the speed you predicted you should go at, takes its toll, psychologically and physically, and I think corroborating the feeling in your arms and legs with appropriate adjustments, including to the pace, makes the run smoother - this is a breakthrough really.

Running should not be turmoil or an obstacle. 

I was wary about wearing long sleeves though. I've got a good track record of enjoying running in short sleeves, because ultimately I warm up sufficiently, so I've resisted the need to change. And I think this is what is important: being OK to adapt, even below expectations, is what made this run better than all the others this year - and certainly twice as long as usual.

The route took me back to my old workplace, around it and all the way back again. It was nice to be in familiar surroundings and I felt fine.

Look, I'd be lying if I said that I didn't feel uncomfortable at times, especially as the clock ticked on over the hour mark, but I just slowed down and kept it in gear, steady and calm, and ultimately this avoided disaster.

Looking back at the stats, in the end a 4"59 pace is mighty impressive because I did not think I'd be reaching anywhere near that. If anything, I thought perhaps at my pace I would be trundling on at around 6 minutes per km. That shows that it wasn't that slow, and I was just running where I was most comfortable, and that varies throughout the route.

I never look at my watch routinely when running.

I've always found that this helps to reduce the stress and strain of accommodating expectations - I think it also messes with your stride.

I don't think we should have expectations about how long we take; we should perhaps have expectations about how long we'll run for and then manage all the factors in the run to make it happen - if that means stopping, slowing down, taking a picture or taking a pee, whatever - do it and then carry on running.

As it happens, I needed a leak at about 16 km in, so I pulled over onto a footpath that was deserted (I had a good nose about to ensure I'd not be interrupted).

This is the first time I've needed to take a whizz mid-run, but I stopped. I rationalised to myself that it would be uncomfortable not to, and why spoil this zen-like run by having to worry about that?

When it comes to my preparation in terms of what I ate - nothing special: I just had my usual porridge and a cup of decaf coffee.

I did, however, sleep until 12 pm. This was an important factor too perhaps. I was well-rested. 

In the end, the long sleeve top was almost unnoticeable and maybe it actually helped me stay comfortable and zen.

This is exactly how it's supposed to be.

  • Great run
  • Analysis
Category: Blog
By Stuart Mathews
02 December 2020 (Last Updated: 02 December 2020)

Sufficiently Complex

I think that change is a wonderful thing - and it is inspirational - you have the opportunity to put into action that which you have had a chance to evaluate... Sure, you can evaluate constantly, but none of it is more effective than that done right before some major required action. That said, it is not always the case, and change can be seen as either an opportunity or a failure.

The SAS, for example, suggest that inducing unexpected circumstances in military training best confirms the effectiveness of the application of learning. I would probably agree with that, and suggest that it probably boosts confidence in soldiers too.

Opportunity or failure: I believe which one it becomes depends not only on your mindset but also on how persistent you are. The latter is, I think, a function of practice, particularly of endurance generally, but perhaps specifically of overcoming obstacles consistently and dealing with new situations as they occur. I mean, that's life pretty much, isn't it?

I think programmers deal with new and unexpected situations all the time - we rationalise, evaluate, apply and move on without any hoo-ha. In this way, I think we consume a lot of uncertainty. The next problem, however, is always more interesting.

Interestingly, I think a lot of time is spent not on solving the problems but on choosing the solutions that solve the problem for the longest time... as change is inevitable in software engineering. I have come to realise that trying to find the best solution is impossible - there are not enough knowns in any software project - and it comes down to pretty much civil law: a balance of probabilities that the solution is effective given what we know.

It's not easy to remember past events, specifically the influences, feelings and effects that events had without revisiting them, which I think is crucial for well-being and confidence. Exercising and thinking about things helps to plan what to do next.

Voltaire once said: "No problem can withstand the assault of sustained thinking", but I'm paraphrasing because he was French - and would probably have said something like: "Aucun problème ne peut résister à l'assaut de pensées résolues."

For example, earlier this year I had to complete a three-part programming exercise online and it was daunting at first (this was as lockdown started, so tons of uncertainty), and I guess, as with any unexpected event, a degree of trepidation was felt for sure. What stayed with me after overcoming the difficulty of the exercise itself - not just the technical complexity of the scenarios, but also the psychological growth that occurred as a result - was the sense of self-determination...

I also participated in a live group-work session via Skype to solve a much larger problem, using hand-drawn diagrams, evaluating requirements and establishing a strategy. I'm a whole lot more comfortable with this now, and I guess that is what experience is: everything I've done in the past and have encountered has taught me something that I can use in the next round.

I've become a lot more confident, for example, because I realised while doing it how much I really enjoy solving problems and modelling situations, and the whole experience was actually a lot of fun. I ended up developing a parser for markdown in one scenario. Another scenario required me to implement a tracking system, using virtual functions to invoke overriding behaviour in derived classes, and the third was to do with sorting.

Small things like the above you often brush away as just being part of the work to be done, but in doing so you lose so much of the valuable tacit knowledge that would otherwise fade with time.

After this, I started a big project at work, research and development, and learnt so much about distributed databases and sharding that it's quite amazing. I'm well suited, I think, to work on large projects that are high in uncertainty. I also learnt a new language (not French - Ruby), which I've quite enjoyed. See my discussion around Deadlocks and databases.

I also began a new course on Information Security, which is pretty neat. I'm working on a more theoretical aspect in this course than I did in Network Security, which, like Digital Forensics, was highly technical. It is basically about implementing ISO 27001 by implementing an ISMS (Information Security Management System), so I'm doing a lot of assessing of criteria, risks, controls, security requirements, policies etc... It's pretty interesting.

Besides that, recently I've been working on my 2D game engine, which is currently undergoing a major refactor. I find that if I can reason about a system within an A5 page in my notebook, then the system is sufficiently complex.

I've got most of the basics in place: a resource manager that swaps scene resources in and out of memory (that works OK), scene management, and a system of layer hierarchies to manage drawing using the painter's algorithm - z-order, that is.
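
The painter's algorithm part is simple enough to sketch (the class here is illustrative, not my engine's actual C++ layer type): layers are drawn back-to-front, ordered by z-order, so higher layers paint over lower ones.

class Layer:
    def __init__(self, name, z_order, draw_fn):
        self.name = name
        self.z_order = z_order
        self.draw_fn = draw_fn

def draw_scene(layers):
    # Lowest z-order first; later (higher) layers overwrite earlier ones
    for layer in sorted(layers, key=lambda l: l.z_order):
        layer.draw_fn()

draw_scene([
    Layer("background", 0, lambda: print("draw background")),
    Layer("player", 2, lambda: print("draw player")),
    Layer("tiles", 1, lambda: print("draw tiles")),
])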

What I need now is to write a level editor to interface with the format of my resouces.xml file and scene{n}.xml file formats so I can create levels and co-ordinate placement of assets etc. I'll probably do this in C# using WPF. 

My current focus is on my event manager, which is my baby right now. I built it from scratch using some primitive ideas, but it has translated into a pretty robust system, handling the distribution of work throughout the subsystems. What I'm looking to do now is incorporate some tests to keep the system green through its lifetime, and I have just recently introduced some using the same framework I used when testing my circular buffer in my audio programming class.
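
The gist of such an event manager, in an illustrative Python sketch (my actual engine is C++ and the names here are made up): subscribers register an interest per event type, raised events are queued, and processing the queue dispatches each event to every interested subsystem.

from collections import defaultdict, deque

class EventManager:
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.queue = deque()

    def subscribe(self, event_type, handler):
        # A subsystem registers interest in one type of event
        self.subscribers[event_type].append(handler)

    def raise_event(self, event_type, payload=None):
        # Events are queued rather than handled immediately
        self.queue.append((event_type, payload))

    def process(self):
        # Distribute the queued work to every subscriber of each event type
        while self.queue:
            event_type, payload = self.queue.popleft()
            for handler in self.subscribers[event_type]:
                handler(payload)

events = EventManager()
events.subscribe("player_moved", lambda pos: print(f"camera follows {pos}"))
events.raise_event("player_moved", (10, 4))
events.process()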

Apart from this, I've watched all the Alien movies, which are frankly quite awesome, and all the John Wick films too. I clocked up about 35+ km last week but my feet are all torn up and I've been hobbling around at times. I also performed a mini-surgery on my foot to strip away all the old skin after a pretty major blood blister - this was a major success.

The company I write software for bought me a pull-up machine for my apartment and offered to pay my gym membership which I think is pretty great, so during covid I've been pretty active: I can do about 10 pull-ups with relative ease now.

I wrote an article on Design Patterns: Representation, Transmission and Dependencies, which I found interesting (but I'm biased), and released my architecture design in the Mazer Game Architecture Report for my game architecture class.

I think the next 6 months are going to be even more intense and I'm gearing up for it!

You can't do everything but you can do something.

  • Programming
  • Game development
  • Concurrency
  • Philosophy
  • Software Engineering
Category: Blog
By Stuart Mathews
29 July 2020 (Last Updated: 29 July 2020)

Try Monad, Progress and Lockdown Rules

I've found working from home to be a lot more useful than perhaps I'd ever imagined it to be. 

The routine and flexibility have allowed me to more easily complete my deliverables in many cases (though I do tend to go to bed later). For example, I am able to go for runs more routinely, and my days are uninterrupted, which helps me concentrate and make productive advances in my work.

Due to these advances, I'm very much considering this to be my default modus operandi moving forward, and reducing my travel and potentially my rent in the future...

At the moment I'm learning about various design patterns as part of my ongoing research, and I even made a foray into Lambda Calculus while reading a paper by Haskell author Paul Hudak written in 1989. I find reading papers to be quite enjoyable and immersive - particularly as I'm already at home after work with no tiresome commute.

I've realised that I very much like doing investigative work, research or otherwise. Particularly true of debugging in general or any aspect where I need to paint a clear understanding from an otherwise unknown or blurry set of circumstances. I really enjoyed the Digital Forensics course I did for example.

I guess at the heart of it, this is a form of conceptual modelling - piecing together understanding about the unknown. And I think it can be quite a personal thing, and therefore quite rewarding - because you've got to figure out, in your own way, how something works, i.e. make it understandable. This can be a very creative process. I find drawing pictures in my notebook the best way to model things and ideas; I'm also quite partial to using my Surface too.

My running has made leaps and bounds; I've increased my fitness to a large extent while under lockdown, which perhaps compensates for the amount that I'm eating too!

I recently wrote about some aspects of the testing framework in Ruby, RSpec let and let! differences, which came up during a discussion, and have found the platform generally useful.

I've also been re-writing a prototype game to use functional programming paradigms and techniques, first to show that it can be done, and then to document what they are - purely for research purposes.

As part of this and other ongoing learning, I've updated my LanguageExt tutorial (which has made it into an Arctic code vault), so it will now be enjoyed for generations to come! It also happens to be its 1st birthday this month.

I've added a new use case for demonstrating the Try<T> monad, which is such a useful thing, but one that somehow - particularly among the people I've spoken to about it - hasn't been appreciated as much.
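
The underlying idea (this sketch is plain Python for illustration - LanguageExt's actual C# Try<T> API differs) is wrapping a computation that may throw, so failure becomes an ordinary value you handle when you choose to:

def attempt(fn):
    # Wrap a computation that may raise, deferring execution and capturing
    # either the result or the exception as a value
    def run():
        try:
            return ("ok", fn())
        except Exception as error:
            return ("error", error)
    return run

parse_port = attempt(lambda: int("not-a-number"))

status, value = parse_port()
if status == "ok":
    print(f"Parsed: {value}")
else:
    print(f"Failed safely: {value}")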

I've also been prodding around in J2EE code using RX-Rest, Glassfish, JSF, JPA and EJBs, but not in a huge way these days. I'm looking for excuses now.

  • Java
  • REST
  • Ruby
Category: Blog
By Stuart Mathews
31 May 2020 (Last Updated: 31 May 2020)

Rails, Euclid and Generating Mazes

Since Set Theory, Ruby and Upgrades, I've been focusing on learning new technologies recently (I changed jobs and now work at VMware). The primary focus of the new platform under development is on centralizing the view of your currently separate clouds (AWS, Azure, GCP, Alibaba, etc.) and managing them from a single operational, governance, security and cost management perspective.

The platform is based on Ruby on Rails, and it is a completely new and another level of learning, centring around a platform that provides the tools to develop online or cloud-based systems. There is a striking resemblance to ASP.net MVC and Spring MVC; however, the added twist of needing to use Ruby makes it an interesting proposition, not to mention its provision for metaprogramming.

I've been in the thick of learning about RSpec, FactoryGirl and the base Rails platform documentation. I've also been watching a lot of Pluralsight videos, which has been quite a time-consuming process, however an interesting diversion at times. I've not even nearly made a dent in the materials.

I recently decided to write a simple game called Ruby Mazer entirely in Ruby over the May bank holiday weekend. This was really fun and got me conversational with Ruby. I used a gem called Gosu to help with the drawing and input. The premise of the game is that you need to reach a point in the autogenerated maze without touching the maze walls in getting there. Once you get to the exit point, you are teleported to a new maze which is slightly more difficult. I used Prim's algorithm to generate and solve the models that make up the level generation routines (a sketch of the idea is below). You can read up on the implementation in Ruby Mazer.
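
Roughly, randomised Prim's for maze generation works like this (an illustrative Python sketch; the game's real implementation is in Ruby): start from one cell, keep a frontier of walls between visited and unvisited cells, and repeatedly knock down a random frontier wall.

import random

def neighbours(x, y, width, height):
    # The four orthogonally adjacent cells that fall inside the grid
    deltas = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in deltas
            if 0 <= x + dx < width and 0 <= y + dy < height]

def generate_maze(width, height):
    visited = {(0, 0)}
    passages = set()  # pairs of cells whose shared wall has been removed
    frontier = [((0, 0), n) for n in neighbours(0, 0, width, height)]
    while frontier:
        cell, nxt = frontier.pop(random.randrange(len(frontier)))
        if nxt in visited:
            continue
        visited.add(nxt)
        passages.add((cell, nxt))  # knock down the wall between cell and nxt
        frontier.extend((nxt, n) for n in neighbours(*nxt, width, height)
                        if n not in visited)
    return passages

maze = generate_maze(10, 10)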

I've had to dial back on my recent reads (DirectX and maths) as priorities have changed and I'm re-focusing on other things (Ruby and Rails being one, and some of the onboarding training being the other); however, as my learning curve comes down, I'll probably continue with them. This has also been the case with my running over the last two weeks.

Having said that, I went for a run yesterday as described in Hot weather running. I just about survived. I also earned two new (three, actually) blisters for my efforts. I played my specifically created Spotify playlist. A particularly good running tune is Michael Jackson's The Way You Make Me Feel, as is Dance Monkey.

In other news, I've recently started focusing on Euclid. It's fairly slow going; however, 'slow and steady wins the race'... It's quite nice to see how common rules - like how the angles at the base of an isosceles triangle are equal - are actually established in an actual proof, so as to show/explain/prove where and how this rationale came from (Proposition 5, Book 1).

The other thing I like about the proofs is that they build upon each other. Initially, if you look at the rationale of a proposition, particularly the starting diagram (Proposition 3, Book 1: given two unequal straight lines, to cut off from the greater a straight line equal to the less), it's not entirely clear that it proceeds from the previous proposition (Proposition 2, Book 1: to place at a given point [as an extremity] a straight line equal to a given straight line). This is sometimes a pain; however, I do think the mental realisation of connecting where the previous proposition left off and how this one derives from it is perhaps a necessary step. The first seven triangle propositions have been an interesting diversion over the last couple of weeks.

I've also been heavily customizing my editor recently for my new workflow - see it here complete with split screens and git integration! 

Now, I must not forget to sleep and must run more. Sleep and run, sleep and run and program in the middle (7 hours) 

  • Running
  • Math
  • Digital Signal Processing and Audio Programming
  • Ruby