- Details
- Category: Blog
- By Stuart Mathews
- Hits: 4857
Since Time shifting, CS algorithms and Game Architecture, I've been exploring trivial set theory to gain insights into what defines a function from a purely mathematical standpoint, which, as it happens, is a subset of the product of two sets. For example, let \( C \) represent a function in terms of a set \( C \subseteq A \times B \), i.e. \( C = \{(a,b): a \in A, b \in B\} \), which references all the possible inputs and outputs of the function, written \( C : A \to B \).
The nomenclature of a function is usually represented as \( F : X \to Y \), or alternatively \( f(x) = y,\ x \in X,\ y \in Y \). If you spend some time going through the theory of functions as expressed on a graph, such as a linear equation, the relationship between the dependent variable \(y\) and independent variable \(x\) is clear. In programming, this is generally interpreted as the running of a function's code, as opposed to the evaluation of an equation or expression, to realise the dependent variable's value.
The theory of sets is pretty interesting in its own right; however, it's particularly interesting in the context of functional programming, specifically pure functions, which are programming constructs that try to model the mathematical ideal of a function. A pure function represents a deterministic mapping from the domain \(X\) to the codomain \(Y\): each input maps to exactly one output. This means nothing else influences \(y\) other than \(x\) (it's a direct mapping). This is a key consideration behind pure functions and immutability in functional programming.
The reason it's key is that this makes \(f(x)=y\) absolutely reliable: if you know \(x\), then you know \(y\). These ideas have implications for real-time processing environments like games, because they usually rely on highly concurrent and often parallel execution of tasks, where reasoning about functions within a threading context requires predictability. This is among the research I'm doing at the moment.
Parallel tasks usually lock critical sections of shared state, and these tasks (functions, really) are prone to producing non-deterministic results precisely because they don't exhibit a deterministic mapping between their domains and codomains. Execution environments that rely on I/O and other external factors make it impossible to guarantee the result of any function that depends on them - not great for a function's predictability.
Making tasks pure arguably makes them reliable, and then you don't need locking either. My argument, then, is that this will make games perform better - and more reliably. My research looks to determine at what cost this comes, particularly to the code itself, such as readability and maintainability.
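To make the argument concrete, here's a minimal sketch (in Python for brevity; the function and the entity layout are invented for illustration, not from any real engine): per-entity tasks that are pure can run in parallel with no locks at all, because none of them touches shared state.

```python
from concurrent.futures import ThreadPoolExecutor

def integrate_position(position, velocity, dt):
    # Pure: the result depends only on the arguments, so no locking
    # is needed when many of these run concurrently.
    return position + velocity * dt

def run_frame(entities, dt):
    # Each task reads only its own inputs and returns a new value,
    # so the results are deterministic regardless of scheduling order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda e: integrate_position(e[0], e[1], dt), entities))

entities = [(0.0, 2.0), (5.0, -1.0), (10.0, 0.5)]
print(run_frame(entities, 0.5))  # [1.0, 4.5, 10.25]
```

Because nothing is mutated, the thread pool can schedule these in any order and the frame's result is the same every time.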
Interestingly, if you do some reading about memoization, it relies on this deterministic behaviour too: you never need to evaluate the function again in the future, provided of course you store its input and output.
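A minimal Python sketch of the idea (the decorator and counter are mine, purely for illustration) - because the function is deterministic, a cached result can never go stale:

```python
def memoize(f):
    cache = {}
    def wrapped(x):
        # Safe only because f is deterministic: the same x always
        # yields the same f(x), so a stored result never goes stale.
        if x not in cache:
            cache[x] = f(x)
        return cache[x]
    return wrapped

calls = 0

@memoize
def square(x):
    global calls
    calls += 1
    return x * x

square(4); square(4); square(4)
print(calls)  # 1 -- evaluated once, served from the cache after that
```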
Apart from mathematics, I've upgraded my Investment Management project to .NET Core 3.1, which takes it right up to the latest supported version. It was a bit of a pain, as I had to go from .NET Core 2.0 -> 2.1 -> 2.2 -> 3.0 separately and then finally to 3.1. It wasn't that complicated, but it wasn't seamless either.
I've also done some more Canal-side runs lately, which I've enjoyed.
Although I don't use functional programming for my day-to-day work in C# anymore, I've written a prototype game in C# which I'd like to 'functionalize'. I've come to appreciate the ideas and would now like to explore their applicability to game development. I'm slowly reading through Functional Programming in C# and weaving it in with the other readings I have going on. I've got through the first 2 chapters, so I've yet to make real headway. I produced a tutorial about functional programming in C# (which got good feedback) and this, I guess, is an extension of that.
There is scope to incorporate Scala into my future work and there are functional aspects to Ruby (see later) - so the trajectory is in the same direction, generally. So I'll not be dropping it quite yet.
I've also been playing around with LaTeX, trying to get it working on my blog (which I've managed to do), and moving forward I'd like to use it to write up some of my work.
I've managed to get out my project proposal, entitled "Evaluating and Applying Functional Programming Paradigms in Developing Computer Games using C# and MonoGame". I had to chase up my supervisor to review the draft, and by the time he'd done it, my proposal had diverged significantly; thankfully his comments were largely already covered in my revised version. I incorporated some of his comments, however, which was helpful. I'll need to start working on the delivery of it.
I'm slowly making my way through "A Mind for Numbers", having got through the first 3 chapters. I'm also learning more about linear equations and the relationships between numbers, specifically the dependent and independent variables that constitute formulae (given my earlier foray into functions), such as trivial straight-line equations in the form \(y = mx + c\). I find it quite interesting, and I'm finding it useful for reinterpreting some fundamental algebraic ideas, such as understanding the nature of proportional relationships (the meaning of gradients, for example).
I've pretty much stalled on applying my linear-algebra learning to 3D transformations (which is the stuff I need to do next and finish...DirectX math, coursework and some films), as I think I've been reading so much about it that I'm a little tired of it (I've read the first 6 chapters of "3D Game Programming Using DirectX 10 and OpenGL" back-to-back, as well as Frank D. Luna's book - lots and lots of theory).
As a distraction, I started reading an old book I bought a while ago (Managed DirectX 9 in C#), which I thought might offer an alternative perspective on viewing transformations and the like. I've got through the first 3 chapters and I'm now at the point where viewing transformations are discussed. So I've got 3 books ready to get into about this! This one's obviously old, and Managed DirectX is no longer a product, but it's exciting nonetheless. I have to avert my eyes when I see some of the device enumeration and selection code, as some of it no longer applies to later versions of DirectX; the version I'm most familiar with (DX10) did away with the compatibility-checking code that features in this book. It is, however, nice to be back in C#.
I created a nice little C++ vehicle that shows the current frames per second, and I'll use this as the basis for my next work, which should start shortly. It will probably also include some HLSL and shader programming, as that's required for the vertex transformations to occur.
I've also been learning Ruby and Ruby on Rails recently, though not in a massive way yet. I've read the basic tutorial on the main website, followed 2 video tutorials on each subject, read the first 4 chapters of "Eloquent Ruby" as well as the section on Ruby in one of my programming books, and created my first dummy Rails application. I find text-based tutorials much more effective than videos, but I guess it depends on your time and motivation. My plan so far is to integrate it into my Investment Management project, using Resque and Redis as the basis of a background job management system, which I've already partially set up.
I've got Halo Wars, Metro: Last Light, Dragon's Dogma and Dante's Inferno currently sprawled out on my couch, and I've not played any of them yet. This means I've been pretty productive. I have played Mass Effect 3, though - that is, until I got to a stage where I kept getting killed and it started to annoy me, so I stopped playing... You've got to make games adaptively progressive!
There is still much to do, and only 2 more weeks to do it in!
- Details
- Category: Blog
- By Stuart Mathews
- Hits: 5143
Since Convolution, Running and Finite State Machines, I've spent the last couple of days focused on reading lots of different stuff.
I'm currently alternating between a few books. The first is "Introduction to 3D Game Programming with DirectX 10" by Frank D. Luna. It's quite technical; however, it's accurate and provides good perspective, particularly around the basic maths such as vectors and matrices.
I'm also reading Rautenbach's "3D Game Programming Using DirectX 10 and OpenGL", which is one of my favourite reads on the subject, primarily because it has the best coverage of 3D transformations I've ever read, and it's got great illustrations too.
With "Foundations and Fundamental Concepts of Mathematics", I'm taking a history lesson into the axiomatic, deductive method used throughout history to describe maths. I've been reading about the development of the deductive method used by Euclid and Archimedes, and some of the history behind varying degrees of assumption, including axioms and postulates - how they are used and how they were formed - and it is pretty insightful. It's theoretical, but pleasingly so.
On a more practical note, I'm now also reading "A Programmer's Introduction to Mathematics", where we're implementing a secret-sharing mechanism using polynomial theory, which is a little more practical. This book takes a little longer to read than the others, but I'm quite interested in it.
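This isn't the book's code, but the idea can be sketched in a few lines of Python: hide the secret as the constant term of a random polynomial over a small prime field (the prime and parameters here are arbitrary toy choices), hand out points on it, and recover the constant term by Lagrange interpolation at zero.

```python
import random

P = 2087  # a small prime modulus, chosen arbitrarily for this toy example

def make_shares(secret, k, n):
    # Hide the secret as the constant term of a random degree-(k-1)
    # polynomial; any k of the n points pin the polynomial down.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term;
    # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=1234, k=3, n=5)
print(recover(shares[:3]))  # 1234 -- any 3 of the 5 shares will do
```

With fewer than k shares, every candidate secret is equally consistent with the points you hold, which is what makes the scheme work.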
Last in my round-robin of reads is "A Mind for Numbers", which is more inspirational than technical and sits squarely within the genre of some of my previous 'popular research' reads such as "Mindset" and "Bounce", as well as "The Checklist". It reads more like a novel than a technical book.
Apart from the physical books, which are stacked up next to me on my coffee table, I've been wandering the passages of the internet too. I recently came across a BBC learning page about computer science fundamentals.
It introduces algorithms such as those used for searching, including linear and binary search methods, and sorting too, using methods such as bubble sort, selection sort, merge sort etc.
It's a series largely aimed at children, but it has real application (just like any maths would) and I found it particularly interesting and fun, which perhaps further suggests that I've not yet grown up.
It also maybe suggests that some computer science fundamentals are better known to children than they have ever been before, as the material was surprisingly well documented and entertaining: it included videos and examples and was suitably 'gamified'. The only thing missing, I found, was actual code implementations...
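To plug that gap for myself, here's what the binary search it describes might look like as a quick Python sketch:

```python
def binary_search(items, target):
    # Repeatedly halve the search interval: O(log n) comparisons
    # versus O(n) for a linear scan -- but the list must be sorted.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 7, 9, 12, 15], 9))  # 3
print(binary_search([1, 3, 7, 9, 12, 15], 4))  # -1
```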
I spent some time implementing some of them for fun as I wanted to test myself.
My first test was implementing bubble sort - a quadratic-time, O(n²) sorting algorithm, pretty inefficient and slow, but it's got a very defined and coordinated setup which makes it easily modelled.
I used a do/while loop, as you'll be swapping at least once, and I count the number of swaps done through each iteration or 'pass' through the list of numbers. The process ends when there is no more swapping to be done, which forms the exit condition of my loop; the list is then sorted:
public static void BubbleSort(ref int[] input)
{
    int swapped;
    do
    {
        swapped = 0;
        for (int i = 0; i < input.Length - 1; i++)
        {
            if (input[i] > input[i + 1])
            {
                var temp = input[i];
                input[i] = input[i + 1];
                input[i + 1] = temp;
                swapped++;
            }
        }
    } while (swapped > 0);
}
It was actually pretty fun doing this, and it can surely be improved upon. Next, I tried the insertion sort:
Insertion sort
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.
This one is a little more difficult because you need to 'look' back at prior elements, so it needs a nested loop: one loop for the initial pass through each of the elements, and then, for each element, a scan of the previous elements leading up to it. Pretty inefficient - something in the region of O(n²) I'd imagine - but totally something that can be modelled in code:
public static void InsertionSort(ref int[] input)
{
    for (int i = 0; i < input.Length; i++)
    {
        for (int j = 0; j < i; j++)
        {
            if (input[i] < input[j])
            {
                var save = input[j];
                input[j] = input[i];
                input[i] = save;
            }
        }
    }
}
These all sorted the list in-place. I stopped there, but I'm going to add a few more in the future, probably the more complicated and more efficient ones, as it's a pretty interesting and enlightening process and helps me practise.
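As a preview of those more efficient ones, merge sort runs in O(n log n); a quick Python sketch (note that unlike the two sorts above, this version returns a new list rather than sorting in place):

```python
def merge_sort(items):
    # Divide: split in half and sort each half recursively.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    # Conquer: merge the two sorted halves in a single linear pass.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 8, 0]))  # [0, 1, 2, 4, 5, 8]
```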
So, in other news, I've finally completed my Network Security course, and I'm pretty much done with Digital Signal Processing. The last piece of work I did on it was the coursework (which ultimately replaced the exam), where I needed to implement a circular buffer and then use it to create audio effects in C++ using FMOD, among other things.
One example of this was creating a delayed (or time-shifted) signal effect like a flanger...
This is what I ended up doing in C++:
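The original C++ listing isn't reproduced here, but its core - a circular buffer used as a fixed delay line, mixing each sample with the one written N samples earlier - can be sketched as follows (in Python rather than the C++/FMOD original; a real flanger would additionally modulate the delay length over time):

```python
class DelayLine:
    """Circular buffer that yields each sample delayed by a fixed offset."""
    def __init__(self, delay_samples):
        self.buffer = [0.0] * delay_samples
        self.pos = 0

    def process(self, sample, mix=0.5):
        # Read the sample written delay_samples ago, then overwrite
        # that slot with the current sample and advance the write head.
        delayed = self.buffer[self.pos]
        self.buffer[self.pos] = sample
        self.pos = (self.pos + 1) % len(self.buffer)
        return (1 - mix) * sample + mix * delayed

delay = DelayLine(delay_samples=3)
impulse = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print([delay.process(s) for s in impulse])  # [0.5, 0.0, 0.0, 0.5, 0.0, 0.0]
```

Feeding an impulse through it shows the echo emerging exactly three samples later, attenuated by the mix factor.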
One of the other things I had to do was set up an occlusion effect, whereby walking in front of an object would effectively cut off sound originating from it.
I used FMOD's geometry system to construct a polygon that represented a wall and positioned it in front of my object. I then registered it with the FMOD system. Pretty fun, as we had to import a new mesh and render it in OpenGL - which was the first time I'd used OpenGL.
Looking back at my DSP course, I actually enjoyed it, particularly the theory, but also the C++ and the work I did on the ring buffer. We covered aspects including:
- Logarithms, Real and complex numbers
- Sampling and quantization of continuous analogue signals into digital form
- Wave theory and properties of waves
- Signal correlation and the Fourier transform: dot product, autocorrelation, cross-correlation, cross-covariance, the correlation coefficient
- Frequency Analysis
- Filtering and convolution
I enjoyed learning about the Fourier transform the most, and I suspect in the future I'll be using it again...
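At its heart, the discrete Fourier transform is just the correlation idea above applied with complex sinusoids; a naive Python sketch (the FFT computes the same result in O(n log n) instead of this O(n²)):

```python
import cmath
import math

def dft(signal):
    # Each output bin k is the dot product (i.e. a correlation) of the
    # signal with a complex sinusoid of frequency k.
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A pure cosine at bin frequency 1, sampled 8 times: the energy lands
# entirely in bins 1 and 7 (the positive and negative frequency).
signal = [math.cos(2 * math.pi * t / 8) for t in range(8)]
bins = [round(abs(x), 3) for x in dft(signal)]
print(bins)  # [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```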
Apart from my Network Security and DSP courses, which have taken a lot of my spare time, I also managed to complete the Game Architecture course.
As a result, I've written a game engine in C# and MonoGame and produced a presentable prototype to demonstrate it. The main concepts covered include:
- Resource Management Strategy
- Character Control and Decoupled Input
- Collision Detection
- Moving and animated game elements and frame-rate independence
- The configurable game world and data-driven approach
- Decoupled removal of game objects from collision detection
- Scoring System use of event listeners
- High score or alternative state load/save mechanism
- Start-screen with FSM
- Power-ups using event listeners and re-use of base classes for game objects
- NPC opponents demonstrating FSM
- Overall game-play and presentation
I have a bit of a demonstration of a Pac-Man-like game which features collision detection, primitive AI and animation. It's very basic and is really just a prototype (not a whole game) to demonstrate the underlying architectural designs and concepts in the game code:
I started with the simple concept of a GameObject, which takes on a lot of the functionality and responsibility of typical game situations, such as calculating bounding boxes, doing collision detection, common update routines, drawing and initialising components etc.
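A minimal sketch of that shape of base class (in Python here for brevity, and the member names are illustrative - not the engine's actual API): shared responsibilities like the bounding box and the broad-phase collision test live in the base, while subclasses override behaviour.

```python
class GameObject:
    """Base class for behaviour common to all game objects
    (illustrative names only, not the actual engine's API)."""
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def bounding_box(self):
        # Axis-aligned box used by the collision test below.
        return (self.x, self.y, self.x + self.width, self.y + self.height)

    def collides_with(self, other):
        ax1, ay1, ax2, ay2 = self.bounding_box()
        bx1, by1, bx2, by2 = other.bounding_box()
        # Overlap on both axes means the boxes intersect.
        return ax1 < bx2 and ax2 > bx1 and ay1 < by2 and ay2 > by1

    def update(self, dt):
        pass  # subclasses (player, NPC, power-up) override this

player = GameObject(0, 0, 10, 10)
wall = GameObject(5, 5, 10, 10)
print(player.collides_with(wall))  # True -- the boxes overlap
```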
Apart from the actual coding which has taken a lot of my time, I've also spent a considerable amount of time distilling the design decisions of my game engine into a report, which really helped me take stock of all the code I'd been writing, specifically the architectural aspects.
There have been a few notable runs during the stay-in phase.
Yesterday, I put Mass Effect 3 into the Xbox and played a little bit - enough to cure the Krogan of the Genophage, though I died about 20 times before I was able to finish the mission. But the persistence paid off!
I also upgraded my Investment Tracker application from Angular 5.2 to 6, which wasn't that bad; however, I'm a bit stuck now, as I can't upgrade beyond that because of a strong dependency on a component that won't upgrade further. I'll need to look at removing it in the future, but it's not a big issue right now.
I'm thinking of doing another project that uses my new knowledge around signal processing to implement a sort of Strava-like application in Angular 8/9 but the jury is out on the schedule of that. I've got a few weeks break before things start going again!
- Details
- Category: Blog
- By Stuart Mathews
- Hits: 3039
Computational power is a fundamental aspect of encryption, which relies on many rounds of computation to produce ciphertext (Lian & Chen, 2011; Putra et al., 2019). Furthermore, encryption relies on the computational difficulty of solving mathematical problems which, while inherently solvable (determining the factors of very large numbers), cannot be solved quickly enough using the computational power of today - that is, unless a key is provided (Cesare, 2015) - which means that “…our security depends on the speed of computers” (Naughton, 2019).
The computational requirements of encryption technology may actually reduce confidentiality and privacy. For example, in applications that do not possess sufficient computational resources, such as RFID technology, weaker, less secure (and less computationally onerous) algorithms are often used instead (Calmels et al., 2006). It is likely that this will become problematic in the future, when encryption algorithms require even more resources to compute.
In cryptography it is often assumed that cryptographic algorithms are purely mathematical objects (Putra et al., 2019), though as their research has shown, the computation of encryption algorithms in hardware such as security chips can and often does ‘leak’ information about the effects of the computation (power, time, sound, electromagnetic waves and vibration), which has been shown to lead to complete key discovery in AES through what are known as ‘power analysis attacks’ (Putra et al., 2019). This has been shown to be true of quantum-based computing too (Scarani & Kurtsiefer, 2014), which shows that the hardware itself would need to be secured to prevent compromise.
While classic encryption relies on “conventional computational paradigms (i.e. the deterministic Turing machine)”, other alternatives exist, such as ‘light-based computing’, which has been shown to overcome DES encryption in less time than conventional computing allows (Salimi Sartakhti & Jalili, 2019).
Even so, with the prediction of Moore’s law, which implies an ongoing advancement of processing power (roughly doubling chip performance every 2 years), and the coming of the quantum age, which brings with it “unrestricted computational power” (Scarani & Kurtsiefer, 2014) to put solving these computationally intensive problems (without a key) within practical reach (Cesare, 2015), the reliance on computation alone (as a predictor of safe encryption) is starting to show its flaws.
The race is on to move to more secure algorithms - and standardisation and adoption are likely to be obstacles in the migration (Cesare, 2015). Other, perhaps less widely considered questions are who will actually ‘own’ any new cryptographic algorithm and hardware, particularly those predicted in quantum cryptography, and how long it will take for a standard to be agreed.
The “Stanford patents” (which include Diffie-Hellman) and the DES and RSA patents expired in 1997, 1993 and 2000 respectively (Vetter, 2010), which in later years has allowed greater freedom of use without risk of litigation, but has also increased our reliance on them. Furthermore, an interesting question is: will it be expensive and/or exclusive to encrypt data in the short term with newly patented algorithms, requiring one to obtain licenses such as those required to use RSA in the years prior to 2000 (Vetter, 2010)?
While it appears that encryption’s dependencies on time and computational power are directly related, another worrying scenario is the effect of the future, particularly that raised by the Dutch General Intelligence and Security Services: that today’s confidential information, when revisited in the future (when such computational restrictions may well be easily overcome), will expose all the secrets of yesterday, leading to what some call ‘intercept now, decrypt later’ (Cesare, 2015).
One problem that seems to be resilient, independent of the algorithm technology and the computational (or non-computational) mechanism involved, is the need for network administrators to be able to reason about, administer and monitor encrypted traffic. While encryption standards are open and royalty-free (Vetter, 2010), mechanisms exist to legitimately classify and even detect the types of applications and protocols in underlying encrypted traffic (Velan et al., 2015) - something that will be increasingly difficult without new encryption standards.
Another problem, which plagues anything new - be that new quantum-based cryptography machines, algorithms or much of anything to do with computing - is what time reveals and the future conceals, and what US Defence Secretary Donald Rumsfeld eloquently phrased as “unknown unknowns”.
Quantum cryptography has known issues (citation), as do existing encryption algorithms like DES (Kolata, 1983), and so too will the current state of the art - all are vulnerable to the future, and it appears that it's just a matter of time before something new and unexpected reveals the next vulnerability. The question - and the answer - is how fast can we react?
References
Naughton, J. (2019) ‘We’re still a long way from making a quantum leap in web code-breaking’, The Guardian.
Scarani, V. and Kurtsiefer, C. (2014) ‘The black paper of quantum cryptography: Real implementation problems’, Theoretical Computer Science, 560(1), pp. 27–32. doi: 10.1016/j.tcs.2014.09.015.
Vetter, G. (2010) ‘Patenting Cryptographic Technology’, Chicago-Kent Law Review, 84, pp. 757–1027.
Salimi Sartakhti, J. and Jalili, S. (2019) ‘On the computational power of the light: A plan for breaking data encryption standard’, Theoretical Computer Science, 773, pp. 71–78. doi: 10.1016/j.tcs.2018.08.015.
Cesare, C. (2015) ‘Encryption faces quantum foe: researchers urge readiness against attacks from future-generation computers’, Nature, 525(7568), pp. 167–168.
Kolata, G. (1983) ‘Flaws found in popular code’, Science, 219(4583), pp. 369–370. doi: 10.1126/science.219.4583.369.
Putra, S. D. et al. (2019) ‘Power analysis attack against encryption devices: a comprehensive analysis of AES, DES, and BC3’, TELKOMNIKA (Telecommunication Computing Electronics and Control), 17(3), pp. 1282–1289. doi: 10.12928/telkomnika.v17i3.9384.
Calmels, B. et al. (2006) ‘Low-Cost Cryptography for Privacy in RFID Systems’, in Smart Card Research and Advanced Applications: CARDIS 2006. Berlin, Heidelberg: Springer, pp. 237–251. doi: 10.1007/11733447_17.
Velan, P. et al. (2015) ‘A survey of methods for encrypted traffic classification and analysis’, International Journal of Network Management, 25(5), pp. 355–374. doi: 10.1002/nem.1901.
Kapoor, B., Pandya, P. and Sherif, J. S. (2011) ‘Cryptography’, Kybernetes, 40(9/10), pp. 1422–1439. doi: 10.1108/03684921111169468.
Lian, J. H. and Chen, K. (2011) ‘Implementation of DES Encryption Algorithm Based on FPGA and Performance Analysis’, Applied Mechanics and Materials, 130–134, pp. 2953–2956.