Since Time shifting, CS algorithms and Game Architecture, I've been exploring trivial set theory to gain insight into what defines a function from a purely mathematical standpoint, which, as it happens, is a subset of the product of two sets. For example, a function \( C : A \to B \) can be represented as a set \( C \subseteq A \times B \), that is, a set of ordered pairs \( \{(a,b): a \in A, b \in B\} \) capturing all the possible inputs and outputs of the function, with the restriction that each input \( a \) appears in exactly one pair.
The nomenclature of a function is usually \( f : X \to Y \), or alternatively \( f(x) = y \) where \( x \in X \) and \( y \in Y \). If you spend some time going through the theory of functions as expressed on a graph, such as a linear equation, there is a clear relationship between the dependent variable \(y\) and the independent variable \(x\). In programming, this is generally interpreted as running a function's code, as opposed to evaluating an equation or expression, to realise the dependent variable's value.
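As a toy example (the sets here are purely illustrative), squaring over a small domain can be written out explicitly as a set of pairs:

\[
A = \{1, 2, 3\}, \quad B = \{1, 4, 9\}, \quad C = \{(1, 1), (2, 4), (3, 9)\} \subseteq A \times B.
\]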
The theory of sets is pretty interesting in its own right, but it's particularly interesting in the context of functional programming, specifically pure functions, which are programming constructs that try to model the mathematical ideal of a function. A pure function represents a deterministic mapping from its domain \(X\) to its codomain \(Y\): each input maps to exactly one output, so nothing else exists that influences \(y\) other than \(x\) (it's a direct mapping). This is a key consideration of pure functions in functional programming and immutability.
The reason it's key is that it makes \(f(x)=y\) absolutely reliable: if you know \(x\), then you know \(y\). These ideas have implications for real-time processing environments like games because they usually rely on highly concurrent, and often parallel, execution of tasks, where reasoning about functions within a threading context requires predictability. This is among the research I'm doing at the moment.
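To make that concrete, here's a minimal C# sketch (the names are mine and purely illustrative) contrasting a pure function with an impure one:

```csharp
using System;

static class Purity
{
    // Pure: the result depends only on the input; the same x always yields the same y.
    public static int Square(int x) => x * x;

    // Impure: the result also depends on hidden mutable state, so repeated calls
    // with the same argument can return different values.
    private static int _callCount;
    public static int SquareWithNoise(int x)
    {
        _callCount++;                // side effect on shared state
        return x * x + _callCount;   // output depends on call history, not just x
    }

    public static void Main()
    {
        Console.WriteLine(Square(3));          // 9, every time
        Console.WriteLine(SquareWithNoise(3)); // 10 the first time...
        Console.WriteLine(SquareWithNoise(3)); // ...11 the second
    }
}
```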
Tasks executing in parallel usually lock critical sections of shared state, and these tasks (functions, really) are themselves prone to producing non-deterministic results precisely because they don't exhibit a deterministic mapping between their domains and codomains. Execution environments that rely on I/O and other external factors make it impossible to guarantee the result of any function that relies on them, which is not great for a function's predictability.
Making tasks pure arguably makes them reliable, and then you don't need locking either. My argument, then, is that this will make games perform better, and more reliably. My research looks to determine at what cost this comes, particularly when it comes to the code itself, such as readability and maintainability.
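As a rough sketch of the idea (assuming a pure per-entity update function and PLINQ; this is illustrative rather than how I'd structure a real game loop):

```csharp
using System;
using System.Linq;

static class ParallelPurity
{
    // Pure per-entity update: the output depends only on the inputs, no shared state is touched.
    static float Integrate(float position, float velocity, float dt) => position + velocity * dt;

    public static void Main()
    {
        float[] positions = { 0f, 1f, 2f, 3f };
        float[] velocities = { 1f, 1f, 2f, 3f };
        const float dt = 1f / 60f;

        // Because Integrate is pure, the work can be farmed out across threads
        // with no locks; each result depends only on its own inputs.
        float[] updated = Enumerable.Range(0, positions.Length)
            .AsParallel()
            .AsOrdered()
            .Select(i => Integrate(positions[i], velocities[i], dt))
            .ToArray();

        Console.WriteLine(string.Join(", ", updated));
    }
}
```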
Interestingly, if you do some reading about memoization, it relies on this deterministic behaviour too, i.e. you never need to evaluate the function for the same input again in the future, provided of course you store its input and output.
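A minimal sketch of what I mean in C# (Memoize is my own helper name, and ConcurrentDictionary.GetOrAdd can invoke the function more than once under contention, so treat this as illustrative rather than production-grade):

```csharp
using System;
using System.Collections.Concurrent;

static class Memo
{
    // Wraps a (pure) function with a cache: assuming f is deterministic, a result
    // computed once for a given input can simply be looked up next time.
    public static Func<TIn, TOut> Memoize<TIn, TOut>(Func<TIn, TOut> f)
    {
        var cache = new ConcurrentDictionary<TIn, TOut>();
        return x => cache.GetOrAdd(x, f);
    }

    public static void Main()
    {
        var slowSquare = Memoize<int, int>(x =>
        {
            Console.WriteLine($"Computing {x}^2...");
            return x * x;
        });

        Console.WriteLine(slowSquare(4)); // computes, prints 16
        Console.WriteLine(slowSquare(4)); // cached, prints 16 without recomputing
    }
}
```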
Apart from the mathematics, I've upgraded my Investment Management project to .NET Core 3.1, which takes it right up to the latest supported version. It was a bit of a pain, as I had to go from .NET Core 2.0 -> 2.1 -> 2.2 -> 3.0 separately and then finally to 3.1. It wasn't that complicated, but it wasn't seamless either.
I've also done some more Canal-side runs lately, which I've enjoyed.
Although I don't use functional programming for my day-to-day work in C# anymore, I've written a prototype game in C# which I'd like to 'functionalize'. I've come to appreciate the ideas and would now like to explore their applicability to game development. I'm slowly reading through Functional Programming in C# and weaving it in with the other reading I have going on. I've only got through the first two chapters, so I've yet to make real headway. I produced a tutorial about functional programming in C# (which got good feedback), and this, I guess, is an extension of that.
There is scope to incorporate Scala into my future work, and there are functional aspects to Ruby (see later), so the trajectory is generally in the same direction. I'll not be dropping it quite yet.
I've also been playing around with LaTeX, trying to get it working on my blog (which I've managed to do), and moving forward I'd like to use it to write up some of my work.
I've managed to get out my project proposal, entitled "Evaluating and Applying Functional Programming Paradigms in Developing Computer Games using C# and MonoGame". I had to chase up my supervisor to review the draft, and by the time he'd done so my proposal had diverged significantly; thankfully his comments were largely already covered in my revised version. I did incorporate some of his comments, however, which was helpful. I'll need to start working on the delivery of it.
I'm slowly making my way through A Mind for Numbers, having got through the first three chapters. I'm also learning more about linear equations and the relationships between numbers, specifically the dependent and independent variables that constitute formulae (given my earlier foray into functions), such as trivial straight-line equations of the form \(y = mx + c\). I find it quite interesting to go through, and quite useful for reinterpreting some of the fundamental algebraic ideas, such as understanding the nature of proportional relationships (the meaning of gradients, for example).
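For example, the gradient is just the rate at which \(y\) changes with \(x\); taking two made-up points on a line, say \((2, 3)\) and \((4, 7)\):

\[
m = \frac{\Delta y}{\Delta x} = \frac{7 - 3}{4 - 2} = 2, \qquad c = y - mx = 3 - 2 \times 2 = -1, \qquad \text{so } y = 2x - 1.
\]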
I've pretty much stalled on applying my linear algebra learning to 3D transformations (which is the stuff I need to do next and finish: DirectX maths, coursework and some films), as I think I've been reading so much about it that I'm a little tired of it. I've read the first six chapters of 3D Game Programming Using DirectX 10 and OpenGL back-to-back, as well as Frank D. Luna's book, so lots and lots of theory.
As a distraction, I bought an old book a while ago (Managed DirectX 9 in C#) which I thought might offer an alternative perspective on viewing transformations and the like, so I started to read that. I've got through the first three chapters and I'm now at the point where viewing transformations are being discussed. So I've got three books on the go about this! This one's obviously old, and Managed DirectX is no longer a product, but it's exciting nonetheless. I have to avert my eyes when I see some of the device enumeration and selection code, as some of it no longer applies to later versions of DirectX; the version I'm most familiar with (DX10) is the one that did away with the compatibility-checking code that features in this book. It is, however, nice to be back in C#.
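To keep the viewing-transformation ideas grounded in the C#/MonoGame world my project proposal targets, here's a small sketch (it assumes the MonoGame Microsoft.Xna.Framework package, and the camera values are arbitrary) of building the view and projection matrices:

```csharp
using Microsoft.Xna.Framework;

static class CameraSketch
{
    // Builds the standard world/view/projection matrices used to transform
    // vertices from model space through to clip space.
    public static void Main()
    {
        Matrix world = Matrix.CreateTranslation(Vector3.Zero);

        // View: camera at (0, 5, 10), looking at the origin, with +Y as up.
        Matrix view = Matrix.CreateLookAt(new Vector3(0f, 5f, 10f), Vector3.Zero, Vector3.Up);

        // Projection: 45-degree field of view, 16:9 aspect ratio, near/far planes.
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45f), 16f / 9f, 0.1f, 1000f);

        // A vertex is ultimately transformed by world * view * projection.
        Matrix worldViewProjection = world * view * projection;
        System.Console.WriteLine(worldViewProjection);
    }
}
```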
I created a nice little C++ vehicle that shows the current frames per second, and I'll use this as the basis for my next piece of work, which should come shortly. It will probably also include some HLSL and shader programming, as that's required for the vertex transformations to occur.
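The frame-counting idea itself is simple; roughly this (expressed here in C#/MonoGame terms rather than the C++ of my vehicle, with illustrative names):

```csharp
using Microsoft.Xna.Framework;

// A rough sketch of per-second frame counting, ticked once per frame from the game loop.
class FpsCounter
{
    private int _frames;
    private double _elapsedSeconds;
    public int Fps { get; private set; }

    public void Tick(GameTime gameTime)
    {
        _frames++;
        _elapsedSeconds += gameTime.ElapsedGameTime.TotalSeconds;

        if (_elapsedSeconds >= 1.0)
        {
            Fps = _frames;          // refresh the reading once per second
            _frames = 0;
            _elapsedSeconds -= 1.0;
        }
    }
}
```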
I've also been learning Ruby and Ruby on Rails recently, though not in a massive way yet. I've read the basic tutorial on the main website, followed two video tutorials on the respective subjects, read the first four chapters of "Eloquent Ruby" as well as the section on Ruby in one of my programming books, and created my first dummy Rails application. I find text-based tutorials much more effective than video, but I guess it depends on your time and motivation. My plan so far is to integrate it into my Investment Management project, using Resque and Redis as the basis of a background job management system, which I've already partially set up.
I've got Halo Wars, Metro: Last Light, Dragon's Dogma and Dante's Inferno currently sprawled out on my couch and I've not played any of them yet, which means I've been pretty productive. I have played Mass Effect 3, though; that is, until I got to a stage where I kept getting killed and it started to annoy me, so I stopped playing... You've got to make games adaptively progressive!
There is still much to do, and only 2 more weeks to do it in!