- Details
- Category: Code
- By Stuart Mathews
- Hits: 3424
I’ve been working on a project at the office recently which involves interoperating with an API written in WCF using message contracts.
Basically, the API manages the lifecycle of lists of securities in much the same way as one would manage a watch list of eBay items. The project revolves around linking this API with an external helper API such that, in normal operation, the one talks to the other and they exchange useful information in servicing a customer’s request – such as creating such a list. This is the integration I'm talking about. We’ve effectively become contractors who modify their API so as to make it talk to the second, external API (the helper API). The API is a traditional on-prem service using Oracle; however, it's moving to AWS using RDS. Not that this affects me too much, as most of that transition affects the lower-level code and I’m much higher up in the business logic.
Most of my work is behind the scenes and should not affect the customer’s view of the service, which they’ve traditionally expected to work in a certain way. This requires that the API calls deliver the same types of messages back, etc.; i.e. the interface should remain, but the implementation will change to facilitate additionally calling out to a further helper API.
With all this being said, there are reams of considerations.
Firstly, that we don’t break the API such that customers’ existing apps break. To this end, the API does have some automated tests which should highlight these issues; that being said, we still need to wire these tests into our source copy of the API, which will be useful. The other is that we modify the code in such a way that what we’ve introduced doesn’t compromise the integrity of the existing codebase – i.e. introduce nasty bugs that we have to fix down the line. This is more of an architectural issue about how we should structure our new code changes to minimise their impact while still delivering the new functionality. Ideally, the new code will be contained and fairly isolated within a new sub-project which will then be used throughout the API. We’ve called this the ‘service’ project and it contains all our integration code.
Initially, we’ve taken a moderately pragmatic starting approach – make the changes in whatever way gets the basic ideas working. This is a risk mitigation strategy, so that we can quantify the amount of work needed to do just this, without taking into account any other considerations, merely proving that it can be done.
Once we’ve established that a pragmatic, naïve and architecture-less implementation solves the imminent tasks, the risk shifts to making sure that we introduce a better architecture that is less ‘as-we-go’ and more reasoned. To this end, we need to ‘design’ our code into an architecture that mitigates future bugs and reduces the surface area of the code we introduce, making changes less problematic and affecting no more of the overall system than absolutely necessary.
The less our code is coupled to the existing codebase, the lower the risk that a change we introduce ripples throughout the system. But like most things, this is a journey: from naïve implementation to realising an architecture to contain it.
As part of introducing a way to reduce the amount of code we change, we also need a mechanism to test the code we’ve introduced. To reduce the amount of code that we change, or at least its impact, we need to change the code in one place and ideally reference this one place throughout the rest of the code as necessary. This is so that we don’t duplicate code in multiple places, and so that what code we do use can be tested in isolation from the places that use it.
This is where the unit and integration tests come in. I'm a great proponent of unit testing – ever since seeing the benefits while designing my C API with integrated tests. Unit tests are great for ensuring assumptions about small units of code remain the same, irrespective of code changes elsewhere. This helps ensure that changing seemingly unrelated code doesn’t break the assumptions that the unit tests cover for other code.
The issue with unit tests for legacy systems is that existing systems aren’t usually designed to be testable; that is, the codebase doesn’t make it easy to run code in isolation from the whole application. There are ways to mock out or fake otherwise difficult subsystems that would usually require the application as a whole to be up and running, though it can be more painful mocking these dependencies than is practical. However, we can introduce our new code such that we can unit test it. The difficult parts that we need to interoperate with can be covered through an integration test, which tests the system as a whole – basically black-box tests where you test the outcomes of running the systems and make sure that they output correct results.
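To make this concrete, here’s a minimal sketch of the idea: the service code depends on an interface for the helper API, so a unit test can swap in a hand-rolled fake without the whole application running. The type names here (IHelperApiClient, ListService) and the MSTest style are illustrative assumptions, not the actual project’s types:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// The service code depends on an interface for the helper API...
public interface IHelperApiClient
{
    bool CreateList(string listName);
}

// ...so a test can substitute a fake that records the call instead of going over the wire.
public class FakeHelperApiClient : IHelperApiClient
{
    public string LastListName;

    public bool CreateList(string listName)
    {
        LastListName = listName;
        return true;
    }
}

[TestClass]
public class ListServiceTests
{
    [TestMethod]
    public void CreatingAListCallsTheHelperApi()
    {
        var fakeClient = new FakeHelperApiClient();
        var service = new ListService(fakeClient); // ListService: the (illustrative) service-project code under test

        service.CreateList("My watch list");

        Assert.AreEqual("My watch list", fakeClient.LastListName);
    }
}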
Another useful thing to do is to try to use the API as a customer – this is the black-box type of integration testing mentioned above: calling the API and ensuring it does what is expected by the customer, and that the changes introduced are doing the right thing – i.e. that the helper API is being called and used correctly. To this end, we’d call the API and then also call the helper API to see that the interoperation took place and that the results are as expected. These types of checks are next to impossible with white-box/unit testing.
A lot of this type of thinking is about risk mitigation; it's about increasing one's chances of success. Attempting to see into the future and trying to circumvent issues that might, and probably will, come up is far better than a wait-and-see approach, though inevitably it usually starts out that way, as I mentioned earlier. It also allows you to be more confident and professional about the work you do. This is why modern software development has as much to do with testing, planning and adapting to change as with anything else. Probably why agile is so useful these days.
Anyway, I’ve started a unit test project and an external cmdline app to integration test the API from a customer’s point of view. I’ve also implemented an internal integration test suite that calls only our service project and allows it to call out to the helper API. So basically two types of integration tests are in place for this project: external customer-API-view and internal-view of our code and its effects on the helper API. The internal integration tests are great because we don't have to run the full API. There is nothing worse than being on a deadline and the tools you use are slowing you down more than the code itself!
As mentioned previously, to ensure that the code we introduce is not scattered all over the show, I’ve ensured that the new code resides in a ‘service’ project, where our code is the service being offered to the API; that code is responsible for calling the helper API.
A good thing about putting all our code in this project is that we can unit test it and we can re-use it throughout the project. Changes in the service project are then uniformly picked up in the other areas that use it – replication, not duplication. Another great thing about the integration tests is that while they exercise the internal service project code, they don't need the main API running, as mentioned before, particularly if you’re just connecting to the helper API. Otherwise, every code change would require a rebuild, redeploy and run each time, and that takes time. Another benefit of isolating all our code in the services project is that we can add timing functions to just that project to see how long certain code takes to run. Integration tests that test the API from the customer’s point of view can then time how long the calls take now, in contrast to how long they took before, without our code.
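A sketch of the sort of timing function I mean – a Stopwatch wrapper around just the service-project calls; the names here are illustrative:

using System;
using System.Diagnostics;

public static class Timed
{
    // Wrap any service-project call to measure how long it takes.
    public static T Run<T>(string operationName, Func<T> operation)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return operation();
        }
        finally
        {
            stopwatch.Stop();
            Console.WriteLine(operationName + " took " + stopwatch.ElapsedMilliseconds + " ms");
        }
    }
}

// Usage (illustrative): var result = Timed.Run("CreateList", () => helperApiService.CreateList(request));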
One thing that is useful is being able to turn off the new functionality without affecting the product, so that the product can operate without it. This kill-switch design should be implemented within the architecture of the newly introduced code, i.e. the service project. Theoretically, you can then just turn off the new functionality and the same level of service that was previously in place, and expected of the API, resumes as normal. This obviously requires a design consideration.
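A sketch of the kind of kill-switch I mean, assuming the service sits behind an interface and a configuration flag gates every call – the type names are illustrative, not the actual project’s:

using System;

// All calls into the service project go through one interface...
public interface IHelperApiService
{
    void NotifyListCreated(string listId);
}

// ...and a wrapper gates every call behind a configuration flag, so switching
// the flag off restores the API's original behaviour untouched.
public class SwitchableHelperApiService : IHelperApiService
{
    private readonly IHelperApiService inner;
    private readonly Func<bool> isEnabled;

    public SwitchableHelperApiService(IHelperApiService inner, Func<bool> isEnabled)
    {
        this.inner = inner;
        this.isEnabled = isEnabled;
    }

    public void NotifyListCreated(string listId)
    {
        if (!isEnabled()) { return; } // kill-switch: skip the helper API entirely
        inner.NotifyListCreated(listId);
    }
}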
So in a nutshell, my integration work has revolved around:
- Architecture for newly introduced code (isolation, testability, low coupling etc)
- Service project
- Kill-switch functionality
- External integration tests – ensure customer expectations are met and that the API still works as before (cmdline)
- Internal integration tests – run our code without having to run the main API while still talking to the new helper API (unit test project)
- Timing functions – make sure we’re not slowing things down, so we can quantify code changes in terms of their performance.
The other thing we’re doing is introducing a mechanism whereby, if the helper API is used independently of the main API, the main API gets notified of this usage. To this end, we’re putting together an AWS Lambda function that will be pushed notifications from the helper API, and the main API will then subscribe to those notifications. Something like this:
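As a minimal sketch of the Lambda end of this – assuming a .NET Core Lambda behind API Gateway that relays the helper API’s notification onto an SNS topic the main API subscribes to; the topic, names and payload handling here are illustrative assumptions rather than the actual design:

using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

public class UsageNotificationFunction
{
    private static readonly IAmazonSimpleNotificationService Sns =
        new AmazonSimpleNotificationServiceClient();

    // Illustrative topic ARN; the real one would come from configuration.
    private const string TopicArn = "arn:aws:sns:eu-west-1:000000000000:helper-api-usage";

    // API Gateway pushes the helper API's notification here; we relay it onto
    // a topic that the main API subscribes to.
    public async Task<APIGatewayProxyResponse> FunctionHandler(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        await Sns.PublishAsync(new PublishRequest(TopicArn, request.Body));
        return new APIGatewayProxyResponse { StatusCode = 200 };
    }
}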
The difficulty in being a third-party contractor is learning how to get the existing build system to work for you – in our case, to deploy the AWS Lambda. At times it feels like we’re build engineers too – not my favourite pastime, as I prefer to be working on the code, not figuring out how to configure build scripts in a system I don’t know. Though that’s the point, I guess – you need to learn to know. This is what's involved in coming in 'cold' from the outside – contractors need to know how to figure stuff out. It's stacked against you otherwise.
I’ve designed the Lambda to receive notifications through HTTP endpoints using API Gateway, and I've had to learn how the internal build system publishes Lambdas using “serverless” scripts. As it turns out, Serverless is a useful system.
The external integration tests, as well as the Lambda, will talk to the main API like any other customer – through direct calls to the API via WCF/basic HTTP. I’ve incorporated this API-calling code into a library that both can use, so I don’t have to duplicate the effort to connect to the API. The external integration tests are run using a cmd line app written in .NET Core 2, as are the library and the Lambda. I've come to love https://www.nuget.org/packages/Microsoft.Extensions.CommandLineUtils/ to help with this. I added some boilerplate code here that contains the skeleton code for coding up a cmd line app like this in minutes. Check it out here. Here is what it basically looks like:
public static int Main(string[] args)
{
    var app = new CommandLineApplication();

    app.Command("Command1", target =>
    {
        target.Name = "Command1";
        target.Description = "Command1 Description";
        target.OnExecute(() =>
        {
            Console.WriteLine("Command1 not ready yet");
            return -1;
        });
        target.HelpOption("-?|-h|--help");
    });

    app.Command("Command2", target =>
    {
        target.Name = "Command2";
        target.Description = "Command2 Description";
        target.OnExecute(() =>
        {
            // work
            return 0;
        });
        target.HelpOption("-?|-h|--help");
    });

    app.Command("Command3", target =>
    {
        // Mandatory arguments for this command
        var argPortfolioNameOrCode = target.Argument("Portfolio name/code", "The portfolio name or code");
        var argPortfolioType = target.Argument("Portfolio type", "The portfolio type");
        target.Description = "Command3 Description";
        target.OnExecute(() =>
        {
            if (MissingMandatoryArguments(target, argPortfolioNameOrCode, argPortfolioType))
            {
                return -1;
            }
            // work
            return 0;
        });
        target.HelpOption("-?|-h|--help");
    });

    app.HelpOption("-?|-h|--help");
    return app.Execute(args);
}
I added some handling for mandatory arguments because it appears that this is not provided by default. The nice thing about this is that you add commands and then you can have integrated help for each command, much like many Linux utilities – git, for example.
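Roughly, the MissingMandatoryArguments helper used above looks something like this – a sketch of the idea rather than the exact code:

using System;
using System.Linq;
using Microsoft.Extensions.CommandLineUtils;

// Checks each declared argument for a value; reports and shows help if any are empty.
// (Lives alongside Main in the Program class.)
private static bool MissingMandatoryArguments(CommandLineApplication command, params CommandArgument[] arguments)
{
    var missing = arguments.Where(a => string.IsNullOrWhiteSpace(a.Value)).ToList();
    foreach (var argument in missing)
    {
        Console.WriteLine("Missing mandatory argument: " + argument.Name);
    }
    if (missing.Any())
    {
        command.ShowHelp(); // reuse the command's integrated help
        return true;
    }
    return false;
}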
Anyway, getting back on track now: the internal integration tests are unit tests within the main API and just test the service project code. The external integration tests are via a cmd line .NET Core app.
One thing that has become apparent is that things take longer than expected when interfacing with people/systems that you don’t know. And sometimes I feel that people aren’t as responsive to your requests as they might otherwise be if you were part of the company (we’re contractors). The other is that it's always more useful to take a top-down, wide, bird's-eye view of your solution’s impact on the system. This allows you to envelop the problem and attack it from all areas and, more importantly, see all the areas. Once you know all the ways in, you can focus on those ways in.
I’m also thinking that it might be a useful time to introduce a mechanism that reduces the impact of code changes in the service project – things like factories, etc. – but I'm not sure what I need yet.
The point of all this upfront cost/work/time is that we now just need to focus on delivering the functional business requirements, hopefully just adding new tests and re-running old tests as we go; this should ensure that we’re introducing working code that doesn’t break each time we make new changes. We will also have confidence, via the integration tests, that the API still works as expected and that our code is testable and isolated.
I’m also reading an interesting book about investing by Tim Hale called “Smarter Investing – simpler decisions for better results”, and I’m also working on my investments project. I’ve recently implemented a means for a user to define their own entities and associate them with investments. It's still in progress but interesting nonetheless. It's on its own branch at the moment until I get it fully integrated. I've also done some Running in the heat.
I met up with my old colleagues from Citrix over the weekend to swap stories. We met up in Gerrard's Cross at the Grey Hound. It's nice to hear what they are doing now. I then took a nice leisurely run back down my old running route and it felt like I was right back running from the office as usual. That's one of the great things about this life: you can run whenever you want. I even waited at the same bus stop shelter and took the same bus home. Quite surreal really.
Anyway, more to come on the trials and tribulations around API integrations, software design decisions and, who knows, maybe even a little running.
- Details
- Category: Blog
- By Stuart Mathews
- Hits: 2711
It’s been a rather eventful week. I’m still trying to get rid of the Nintendo music playing in my head. They had that music playing out of the bushes in the resort the whole time.
It was pretty sunny and a good day to be out. I got bored at times and I thought that maybe that's what being a parent might be like – all the time… You’re bored, but you’re actually ok with it as long as everyone else is happy.
This whole experience has made me think about parenting and children. Basically, you sacrifice your life for your children and everything is for them. Their excitement and happiness is your excitement and happiness.
Small kids have such short memories. They can have a full-blown meltdown over a splash of water you playfully whipped up on them from a gentle stream running down a slope at Legoland, and the next moment you’re holding them in your arms trying to console them over your seemingly dastardly act. Oh, and you feel so bad for making them cry, which is weird because it's not often that a seemingly fun and playful act results in tears. Your brain is like: what? Why crying? That was fun – I don’t understand, help, this is new…
It's easy to see why you’d become overwhelmingly intoxicated with emotion and heartfelt compassion for them. I guess this is what turns on within the brain of a parent as they raise their children. It's surely an evolutionary trait. Everything about them is so fragile and innocent and they really cannot fend for themselves. You must intervene at all times if they are ultimately to survive; nothing depends on you more than that. It’s quite humbling and scary, and I'll admit that at times I'd been over-worried about certain things. I’m not a father but I can see what might be involved.
There is the story about Eric Clapton’s little boy, who fell to his death from a high-rise window. You have to be vigilant; it's important and crucial. The song Tears in Heaven is about it. You have to keep your eyes on them all the time.
You also start to detect the pitch and sound of your people: like in the wild, where a spotted wild dog periodically chirps to get a response from her pup. It’s the same being in a noisy shopping store; it's like sonar: you hear the mom call out to the kid, the kid reacts and all is good. That’s one way to not have to have line-of-sight all the time. I didn’t know that, so I’d be constantly scanning up and down, watching feverishly at all times. Dumb me.
And everything takes such a long time, dressing, washing, eating, sleeping and all of the time seems to involve talking in the background, like all the time. Answering questions, explaining things, asking, nudging, telling. It's certainly a full-time job. But you do it without thought or spite because of that realisation that it's so overwhelmingly necessary to do so and how wonderful it makes you feel to be so important and instrumental.
I just went to the gym in the mornings and let them go through the laborious and time-consuming phases of getting ready – which seems to take a lifetime for the reasons I mentioned above. I think it was a good strategy because by the time I’d come back, they were ready to go and no one was stressed out. And then boy did we go, place after place after place. I’ve been to more places in one week than I’ve been to in a whole year.
It was like persuading cats to do my bidding and to believe that what I’d say was in their best interest. It's a strange thing, because you want them to have fun but you want them to be careful, and you feel that they are not, so you have to instruct them, which makes you feel like you’re trying to run their lives; but you’re really not, you just want them to have fun and be safe. These things are tough to do at the same time. I’m a buzz killer at times, I think, trying to over-plan, direct and star in the movie. After a while you realise that you’re really not starring in this movie, you’re outside the frame, and that's ok – as long as everyone else is happy. I think that’s what parents are about. It's a thankless job but you don't need thanks.
I’d have to plan ahead because I knew that if I didn’t warn them about the next train stop being ours, or that they should go left now and right next, there would be agitation and stress, and no one wants that. But I think as a parent you almost become a mindless zombie when it comes to having to deal with things that a single person would otherwise avoid. Examples are stress, confusion and pain – these things you just become familiar with and you gain some resilience to them. I’m not a parent; I’ve merely been a temporary, pseudo-in-place father, and I’ve had to become accustomed to it, learning that a child, without experience or real-world consciousness, doesn’t really avoid these things, and as such, as a parent, you have to deal with them for them.
Being self-sufficient, I’ve been able to hone many aspects of living – strategies to accomplish things in an optimal way. All this evaporates with young children. So the best you can do is look to the imminent future, predict what could happen and prepare for it. Not try to manipulate things towards a predictable outcome, because you can’t do that, but just try to be prepared. That’s why you see parents equipped to the hilt with bottles, blankets, strollers, prepared foods and emergency things that you’d not think about, because you know you’ll not need them, because you’ve planned the outcome of your day. Children and circumstance make this impossible. That’s why they head to the lifts. I think modern-day lifts in cities are less for the elderly and more for the pram. When is the last time you saw a granny shopping in a Gap store on Oxford Street? Never.
People do grow when they become parents. I’ve seen this now, they mature and they have a sense of worth and dedication to a life that before they didn’t. I’ve said this before and I might be wrong but sometimes I think people have children because they have reached what they feel to be the ceiling of their lives(and this might be biological too). Sometimes I think people get bored of life and then get married and have children to remedy it, to add a newness to their waning experiences. It certainly does change things and it certainly adds meaning to their lives.
I played the pseudo-dad for most of the time, supporting and facilitating, trying to see the future and trying to prepare for it. Most times it involved carrying things, doing things and being an Oracle and trying desperately to be good at it. Sometimes you feel you’ve let down the team if you don’t know an answer to something. Other times these seeming shortfalls are overlooked if you just say you don’t know instead of trying to extrapolate an answer.
I’m still haunted by Eric Clapton’s little boy, though. It makes me quite fearful of the potential of the future, and then this makes me determined to try not to be a victim of it.
The weather has been great. Being on the London Eye was lovely and peaceful; I think, although I’ve been told it was really boring, for me it eclipsed Shrek’s Adventure, the West End musical, the Disney stores, the high-street retailers and Legoland (which, to be fair, I really enjoyed). The London Eye, and also all the connecting parts, the travelling, is what I enjoyed the most. It's kinda weird that all the bits in between, getting to and from places, were the most enjoyable for me (and the most annoying for everyone else, it seems).
Like just sitting in the pod of the London Eye, watching the world from up high: everyone is peaceful and you are almost literally in your own little bubble that you can almost have a grip on; you get to just watch and listen and see and do nothing – just watch and feel the world and not actually be in it. Funnily enough, both the travelling and the London Eye have been cited as the most arduous, boring and painful of the experiences this holiday.
I’ve come to enjoy the predictability of being on a train: knowing when to get off, knowing that nothing will happen while we're travelling. It’s where I can plan things and, most importantly, now that I come to think about it, when I can really turn off and embrace the sights and sounds of the people around me. I'm not an outgoing person, I'm an introvert, and when I can listen – not instruct, just listen and watch – is when I'm at my best.
I’m not a born leader, so instructing people is not easy and I’m not that good at it. I might come across as quite rigid. It gets quite hairy in places I don’t know; receiving questions requires more working-out and reasoning and thinking than in places you’ve been before. Buses you’ve never taken before, trains that go to places you’ve never been before – all while making sure everything is ok – is kinda challenging, but kinda worth it. It's like solving a puzzle: you feel great after doing it.
It’s a great feeling listening to conversations at the back of a bus after a long day out. It’s long, but I enjoyed that more than the shopping on Oxford Street or the glitzy amusements trying to grab your attention. The only thing that has your attention on these long hot commutes are those around you – and it’s fairly peaceful – not much goes wrong when one is on a train – except when a 4-year-old needs a wee and it’s 11 pm and the station you’re forced to get off at has no toilets… improv is the name of the game.
It’s safe to say that my life has been shifted out of the way somewhat; I’ve made way for folding prams, carrying and lifting things (but not weights!). My single life has had some repercussions, however – having only one of most things (a slant at optimisation) is at odds with parenting, where there is a lean towards the convenience of now over the burden of it tomorrow.
As I write this, I have a pair of little boots on my desk that look like they should belong to a miniature human, and they do.
Sprawled out over my living room there are clothes (mostly pink), bags, opened toys and unopened ones (also mostly pink), shopping, blankets, suitcases (also mostly pink) and Disney merchandise, where once there was just an empty carpet, a nicely positioned rug and lots of empty space.
It's not a bad thing nor is it a good thing; it's just a thing – an interesting thing.
- Details
- Category: Blog
- By Stuart Mathews
- Hits: 2773
I watched American Sniper last night, and just before that I got a stubborn C++ program to compile/link against a static library. I ended up just recreating everything instead of trying to figure out why it was not linking up as it should have done. That was quite frustrating.
I'm developing something similar to https://github.com/stumathews/stulibc but for C++ because I'm currently reading up about game architecture and design and most material is in C++. This is mostly because you can design quite nicely in C++ in terms of abstracting the complexities of systems. That being said, you can also wrap yourself knee-deep in complexity trying to abstract the complexity. That's a true criticism I guess of most things but C++ makes it easier.
It's far too broad a dialect. I've rarely used more than a handful of features, and in most cases it's really just been C with OOP, because that's where C++ shines. I'm looking forward to fleshing out my design strategy so far, which is A simple game engine architecture. I've always been a fan of C, however, and should get back to writing an interface to some electronics project via the Linux kernel at some point. I'll get there.
I also went to the gym this morning; the sun gets up in the sky so early in summer that it forced me to wake up at 05:30 today, and I figured I'd get some work done before heading out to the airport. I've had a good couple of runs/workouts recently in spite of working in the City again, though now I've started to run home again too. I'm really a Creature of trend, and once I've got a pattern I latch on like a carp.
My next week is set to look somewhat like this: LEGOLAND, Shrek's Adventure, London Eye, Prince Edward Theatre, Harrods, Disney Store, Disney Cafe, H&M, and party shops.
More outings in a week than I've done in a year.
I've also recently realised that I don't have bath plugs, so I'm going out to go buy some. I don't usually have a bath, but now that I know that I can't, my priorities have changed.
I started reading about construction recently; I took out a book about what's involved and it's pretty interesting. It's this one: https://www.amazon.co.uk/Construction-Mathematics-Surinder-Virdi/dp/0750667923 and I need more time to work on the examples if I'm going to get anything out of it.
I got interested in the science behind construction while reading a book about how to reduce the complexities of modern science through the simple checklist.
The author had visited a civil engineering company next to his hospital (he is a surgeon), as he wanted to know how they dealt with the complexities of standing up huge buildings.
Construction sites can sometimes seem like a haphazard setup; however, for me, what's more interesting is the trust engineers and designers have in concepts like weight distribution, strength and ratios that ensure things don't collapse. It's easy to think that Pythagoras, volume and various geometrical and arithmetic concepts are the domain of the classroom, when they in fact save lives and enable extremely awe-inspiring feats of construction.
Right, bath plugs...
- Details
- Category: Blog
- By Stuart Mathews
- Hits: 3231
I watched the new Solo film on Sunday, that is, the Han Solo film from Disney’s new range of films. I enjoyed it – the sci-fi-ness of it; the environments and scenery were cool. It's nice to be back in space and seeing fantastical concepts in motion. I also finally decided to invest in a new watch, which had been a long time in the making. I settled on a Garmin Forerunner 235 with integrated wrist-based heart rate monitoring. You can read about it here: Smash-run, Garmin 235 and various things. It’s actually very nice and I’m very happy with my decision. So I’ve retired my long-serving and faithful Suunto Ambit 1, which was showing its age but was resilient.
A couple of weeks back I added more functionality to my investment project, adding token-based authentication with JWT tokens as well as implementing the preliminary ideas behind role-based access. This now means that you can’t log in without signing up.
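As a flavour of the JWT side of this, here's a minimal sketch of the kind of bearer-token setup I mean on the ASP.NET Core side – the key, issuer and audience values are placeholders, not the project's actual configuration:

using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

public partial class Startup
{
    // In ConfigureServices: validate incoming bearer tokens against a signing key.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.TokenValidationParameters = new TokenValidationParameters
                {
                    ValidateIssuer = true,
                    ValidateAudience = true,
                    ValidateIssuerSigningKey = true,
                    ValidIssuer = "investments-app",          // placeholder
                    ValidAudience = "investments-app-users",  // placeholder
                    IssuerSigningKey = new SymmetricSecurityKey(
                        Encoding.UTF8.GetBytes("replace-with-a-real-secret")) // placeholder
                };
            });
        services.AddMvc();
    }

    // And in Configure: app.UseAuthentication() must come before app.UseMvc().
}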
I also added the first of my auditing functionality, which predominantly keeps a record of the changes that have occurred within an investment – the more useful activities such as ‘changed value’ or ‘associated factor x with investment’, that sort of thing – but there are other audit activities too, such as ‘created a user’, etc.
That’s the login page, which protects all the pages unless a valid token is held by the user (having signed up and then logged in). The activity log that I mentioned looks something like this.
You’ll also notice that I’ve added a new top nav-bar, which looks kinda nice although it doesn’t do much (except hold a logout button), but I think I’ll start moving more functionality into it going forward.
There are some things that I’d still like to do, but I’m as yet uncertain how I’d like to do them. For instance, I’d like to add a value system so that I can evaluate and compare investments, particularly how values are performing. I’ve got some ideas about setting up a correlation matrix and/or a Monte Carlo simulation using investments’ stock price history (time-scale end-of-day prices, etc.), but I’m unsure how, or if, I want to store this information in the SQL database. Also, there isn’t a consistent (and free) way to query financial information live across both UK and US stocks. So this is on the back-burner until I have a satisfactory plan of action.
I almost forgot: I’ve also added relationship diagrams per individual Group, Factor, Risk and Region. What this means is that, for a given one of those entities, if you select one from any investment, it will show you all the other investments that also have this entity, pictorially – you could already see this relationship through the listing of related investments, but I like graphs.
Apart from that, there are a few other inconsistencies that I’d like to address, such as making navigation to and from pages that link to other investments more useful. For instance, on the graph above, I’d like the node links on the graph to be clickable, and I’d like (possibly) to include a history of past navigations – but this isn’t really all that necessary and is more of a nice-to-have.
Also, I’ve started doing a series of blogs on math fundamentals, my latest post being ‘What are fractions really?’. This is an effort to expose some of the underlying assumptions that most people don’t have when they talk about mathematics, which is a problem I’ve encountered and I’m sure many have too – being hurried through math without due diligence. You can explore further what I mean in my mini-rant about the general education of math in this article. Most of my best insights so far have come from a book I’m reading called “Mathematics: Its Content, Methods and Meaning”.
- Details
- Category: Code
- By Stuart Mathews
- Hits: 3055
Recently I've added a nice way to visualise shared relationships using force-directed graphs to a web app that I'm porting (Java/Spring -> C#/.NET).
The main reason why I wanted to visualise these shared relationships is that it very quickly puts a lot of information into perspective (and they're pretty and I like graphs – is that so bad?).
I used to use Neo4j, but I've decided to implement the graph relationships myself using Entity Framework Core and, for the most part, it's quite good. EF Core isn't quite the same as the full .NET Framework version of EF (the fat version), but it's damn near close.
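For a flavour of what implementing the relationships yourself looks like, here's a minimal sketch of a many-to-many mapping in EF Core (2.x-style, with an explicit join entity) – the class names are illustrative, not my actual model:

using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

// Investments and groups are many-to-many: an investment belongs to many groups
// and a group contains many investments, linked through an explicit join entity.
public class Investment
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<InvestmentGroup> InvestmentGroups { get; set; }
}

public class Group
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<InvestmentGroup> InvestmentGroups { get; set; }
}

// Join entity: one row per investment/group link.
public class InvestmentGroup
{
    public int InvestmentId { get; set; }
    public Investment Investment { get; set; }
    public int GroupId { get; set; }
    public Group Group { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Investment> Investments { get; set; }
    public DbSet<Group> Groups { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The join table is keyed on both foreign keys
        modelBuilder.Entity<InvestmentGroup>()
            .HasKey(ig => new { ig.InvestmentId, ig.GroupId });
    }
}

The graph's nodes, links and weights then fall out of counting the join rows per group.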
What you're seeing below are various investment groups which investments are placed into, and how they and other investments share those groups. I've also coded in a weighting system, so in effect, the bigger the dot/group, the more investments are in that group. I've got the same idea throughout my web app. This is what it looks like in a static screenshot, but in reality these graphs are 'alive' and have gravity: you can swirl them around and they respond!

To do this, a little bit of JavaScript makes anything seem possible (it's TypeScript actually). Here is the code that you can add to a component in Angular. The key part of this is the render() function, which brings this all together:
import { Component, Input, NgModule, OnInit, AfterViewInit, OnDestroy, ViewEncapsulation } from '@angular/core';
import { ApiService } from './../../apiservice.service';
import { GraphData } from '../../Models/GraphData';
import { EntityTypes } from '../../Utilities';
import * as d3 from 'd3';

interface Datum {
  name: string;
  value: number;
}

@Component({
  selector: 'app-graph',
  templateUrl: './graph.component.html',
  styleUrls: ['./graph.component.css'],
  encapsulation: ViewEncapsulation.None
})
export class GraphComponent implements OnInit, AfterViewInit, OnDestroy {
  EntityTypes = EntityTypes;
  @Input() InvestmentId: number;
  @Input() EntityType: EntityTypes;
  name: string;
  svg;
  color;
  simulation;
  link;
  node;
  circles;
  labels;
  data: GraphData;

  constructor(protected apiService: ApiService) { }

  ngOnInit() { }

  ngAfterViewInit() {
    // Fetch the graph data for this investment/entity type and render it when it arrives
    this.apiService
      .GetInvestmentGraphData(this.EntityType, this.InvestmentId)
      .subscribe((graphData) => this.render(graphData),
        error => console.log('Error occurred getting graph data: ' + error));
  }

  ticked() {
    // Re-position the links and nodes on each tick of the force simulation
    this.link
      .attr('x1', function (d) { return d.source.x; })
      .attr('y1', function (d) { return d.source.y; })
      .attr('x2', function (d) { return d.target.x; })
      .attr('y2', function (d) { return d.target.y; });
    this.node.attr('transform', function (d) {
      return 'translate(' + d.x + ',' + d.y + ')';
    });
  }

  render(graph) {
    const SvgTagName = '#' + EntityTypes[this.EntityType];
    this.svg = d3.select(SvgTagName);
    const width = +this.svg.attr('width');
    const height = +this.svg.attr('height');
    this.color = d3.scaleOrdinal(d3.schemeCategory20);

    // Set up the force simulation: links attract, bodies repel, everything is centred
    this.simulation = d3.forceSimulation()
      .force('link', d3.forceLink().distance(90))
      .force('charge', d3.forceManyBody())
      .force('center', d3.forceCenter(width / 2, height / 2));

    this.link = this.svg.append('g')
      .attr('class', 'links')
      .selectAll('line')
      .data(graph.links)
      .enter().append('line')
      .attr('stroke-width', function (d) { return Math.sqrt(d.value); });

    this.node = this.svg.append('g')
      .attr('class', 'nodes')
      .selectAll('g')
      .data(graph.nodes)
      .enter()
      .append('g');

    // The weighting system: a node's radius scales with its value
    this.circles = this.node
      .append('circle')
      .attr('r', function (d) { return Math.sqrt(d.value) * 5; })
      .attr('fill', (d) => this.color(d.value))
      .call(d3.drag()
        .on('start', (d) => this.dragstarted(d))
        .on('drag', (d) => this.dragged(d))
        .on('end', (d) => this.dragended(d)));

    this.labels = this.node.append('text')
      .text(function (d) { return d.name; })
      .attr('x', 6)
      .attr('y', 3);

    this.node.append('title').text(function (d) { return d.value; });

    this.simulation
      .nodes(graph.nodes)
      .on('tick', () => this.ticked());
    this.simulation.force('link')
      .links(graph.links);
    this.simulation.alpha(0.8).restart();
  }

  dragged(d) {
    d.fx = d3.event.x;
    d.fy = d3.event.y;
  }

  dragended(d) {
    if (!d3.event.active) { this.simulation.alphaTarget(0); }
    d.fx = null;
    d.fy = null;
  }

  dragstarted(d) {
    if (!d3.event.active) { this.simulation.alphaTarget(0.3).restart(); }
    d.fx = d.x;
    d.fy = d.y;
  }

  ngOnDestroy() { }
}
This uses D3.js v4 to render the node data into these lovely-looking graphs. Look it up if you like it :-)
Apart from this, I've also implemented a new search component and, well, it's kind of great, so I thought I’d mention it.
It's the kind of as-you-type, filter-down-your-selection type thingy; something that's usually only really possible through the use of JavaScript trickery and black magic (which, for the most part, is what development is). This has, for a long time, been an envy of mine and, honestly, using Angular 4 with its new "Pipes" functionality makes this so easy it is disturbing.
Ok, so let me show you real quick what it looks like (see the new empty search bar at the top - before):
And as you type, it filters down the collection (after); this is showing only those investments that have 'tech' in their names.
Pretty cool, eh?
Anyway, how this basically works is that it uses a simple search criterion and either matches an item against your search term (includes it) or doesn't (filters it out).
First, you pass in your objects through the filter like this:
<tr *ngFor="let investment of Investments | filter : searchText">
  <td>
    <strong><a *ngIf="investment" [routerLink]="['/InvestmentDetails', investment.id]">{{ investment.name }}</a></strong>
  </td>
  <td>{{investment.description}}</td>
  <td>{{investment.symbol}}</td>
  <td>{{investment.value}}</td>
  <td><a (click)="delete(investment.id)" href="javascript:void(0)">Delete</a></td>
</tr>
And then my investment objects are filtered down by the filter, and the filter is defined with that criterion, or predicate, I was talking about back there, which takes objects out of, or keeps them in, the collection on the fly. Let me show you:
import { Pipe, PipeTransform } from '@angular/core';
import { Investment } from './Models/Investment';

@Pipe({
  name: 'filter'
})
export class FilterPipe implements PipeTransform {
  transform(items: Investment[], searchText: string): any[] {
    if (!items) { return []; }
    if (!searchText) { return items; }
    searchText = searchText.toLowerCase();
    // So notice here that I'm choosing what needs to be included in the collection
    // and this is dynamically evaluated against what I type in. Awesome.
    return items.filter(it => it.name.toLowerCase().includes(searchText));
  }
}
And the result is quite a lovely experience in my opinion.
There is a video I made also:
Peace.
- Details
- Category: Blog
- By Stuart Mathews
- Hits: 3068
I’ve been working a lot in Python recently.
Besides at work, I’ve also been working on converting my broker project, which was originally written in C, to Python.
While doing this, I started to look at the network socket code first, and I'm amazed at how easily it's able to serialize data over the wire using JSON with so little fuss. I'll explore this a bit further...
Now, originally I didn't use JSON in the C version; I used MsgPack, which is like JSON but is a binary protocol and faster. Anyway, it roughly achieves the same goal – sending messages across the wire.
One thing you always need to do in TCP programming is agree on a protocol between the two parties in the communication. Something like "I'll send the size of the packet first, and then the rest of the data...", so the receiving end reads only that much data. Also, you need to convert the bytes to network byte order before sending and then convert to host order on receiving.
Anyway, you can't really get away from this whether you're in C# or in plain old C, or Python for that matter. So just for interest's sake, here is how it's done in C compared to how it's done in Python.
C:
/* readn - read exactly n bytes */
int netReadn(SOCKET fd, char *bp, size_t len)
{
    int cnt;
    int rc;
    cnt = len;
    while (cnt > 0)
    {
        rc = recv(fd, bp, cnt, 0);
        if (rc < 0)                /* read error? */
        {
            if (errno == EINTR)    /* interrupted? */
                continue;          /* restart the read */
            return -1;             /* return error */
        }
        if (rc == 0)               /* EOF? */
            return len - cnt;      /* return short count */
        bp += rc;
        cnt -= rc;
    }
    return len;
}
Python:
# Read the newline-terminated length header one byte at a time
length_str = b''
char = socket.recv(1)
while char != b'\n':
    length_str += char
    char = socket.recv(1)
total = int(length_str)

# Then read exactly that many bytes into a pre-allocated buffer
view = memoryview(bytearray(total))
next_offset = 0
while total - next_offset > 0:
    recv_size = socket.recv_into(view[next_offset:], total - next_offset)
    next_offset += recv_size
The Python code comes from the jsonsocket source, while the C comes from code in Stulibc.
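Just for interest, here's a sketch of what the sending side of the same length-prefix protocol could look like in C# – not from the broker project, purely an illustration: the length header is ASCII text terminated by '\n', matching the Python reader above.

using System.Net.Sockets;
using System.Text;

public static class Framing
{
    public static void SendMessage(Socket socket, string message)
    {
        var payload = Encoding.UTF8.GetBytes(message);

        // Send the payload length first as ASCII text ending in '\n',
        // so the receiver knows exactly how many bytes follow.
        SendAll(socket, Encoding.ASCII.GetBytes(payload.Length + "\n"));
        SendAll(socket, payload);
    }

    private static void SendAll(Socket socket, byte[] buffer)
    {
        // Socket.Send may send fewer bytes than asked for; loop until done.
        var sent = 0;
        while (sent < buffer.Length)
        {
            sent += socket.Send(buffer, sent, buffer.Length - sent, SocketFlags.None);
        }
    }
}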
That was a little diversion that I found quite interesting while converting my C to Python.
Apart from that, I’ve also been working generally in Python at work recently and have written some interesting code...
Chunking up data before sending it up to the internet: this is very cool; it basically turns one array into many smaller arrays. I like.
if shouldChunk:
    # Split the holdings into ChunkSize roughly equal sub-arrays
    chunks = numpy.array_split(array(holdings), ChunkSize)
else:
    # No chunking: one array containing everything
    chunks = numpy.array_split(array(holdings), 1)
Using list comprehensions in Python to construct objects in one line, much like LINQ's .Select() function:
requests = [models.HoldingDto(security_uid=request["securityUid"],
                              holding_type=request["holdingType"],
                              units=request["units"],
                              settled_units=request["units"],
                              cost=0,
                              properties=None,
                              transaction=None) for request in chunk]
Caching and reloading of data: in Python it's a doddle to dump and reload dictionaries and lists. It always amazes me how much we can do with simple lists. Here I'm serializing a mapping called isin_to_secuid, having generated it using a routine which need not be re-run if we can cache the results... which we can:
if isNewLoad or shouldResolveLusid:
    # Generate the ISIN -> security-id mapping and cache it to disk for next time
    isin_to_secuid = GetSecuids(ticker_to_isin, client)
    pickle.dump(isin_to_secuid, open(secuid_cache_name, "wb"))
else:
    # Reload the previously cached mapping instead of regenerating it
    isin_to_secuid = pickle.load(open(secuid_cache_name, "rb"))
return (ticker_to_isin, isin_to_secuid)
I've also had to call into the Thomson Reuters DataScope platform recently, basically to get information about RICs – Reuters Instrument Codes – which I've come to learn are how Thomson Reuters identifies securities.
Here is how you can obtain a token and issue REST requests to DataScope. The link above is to their documentation page, which I used to come up with the code below. So, below I'm requesting ISIN codes for ticker (aka RIC) codes.
This also shows how easy it is to manipulate headers and send plain HTTP requests in Python, much like in TypeScript or JavaScript.
import requests

def GetDataScopeToken(username="x", password="y"):
    # Request an authentication token from DataScope
    headers = {'Prefer': 'respond-async', 'Content-Type': 'application/json'}
    url = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/Authentication/RequestToken'
    json = {
        "Credentials": {
            "Username": username,
            "Password": password
        }
    }
    r = requests.post(url, json=json, headers=headers)
    return r.json()

def GetDataScopeInstrument(token, source):
    # Search for an instrument by RIC, asking for its ISIN as the preferred identifier
    headers = {'Prefer': 'respond-async', 'Content-Type': 'application/json',
               'Authorization': 'Token {token}'.format(token=token)}
    url = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/InstrumentSearch'
    json = {"IdentifierType": "Ric", "Identifier": source,
            "InstrumentTypeGroups": ["CollatetizedMortgageObligations", "Commodities", "Equities",
                                     "FuturesAndOptions", "GovCorp", "MortgageBackedSecurities",
                                     "Money", "Municipals", "Funds"],
            "PreferredIdentifierType": "Isin", "MaxSearchResult": 10}
    r = requests.post(url, json=json, headers=headers)
    return r.json()
I've also discovered an interesting library (tqdm), which allows you to track iterative processes in a visual manner. What's more, it works with Python's concurrent.futures library to aid async code, which makes me weak at the knees! I like!
It's just a matter of tacking the functionality onto for loops and boom – progress bars! Incredible.
Notice that I'm sending up 10,861 items and dividing them into 200 chunks, which is roughly 54 items a request. Parallelize these and you're in business!
Here is how you can do it using a tqdm iterator:
from tqdm import tqdm

iter = tqdm(HoldingDataDf.groupby("HoldingsDate"))
for group_name, group in iter:
    # Update the progress bar's label as each group is processed
    iter.set_description("Processing Group ({name}), holdings = {size}".format(name=group_name, size=len(group)))
    # do work
And here is how you can do it using concurrent.futures:
import concurrent.futures
from concurrent.futures import as_completed

with concurrent.futures.ThreadPoolExecutor(max_workers=MaxThreads) as executor:
    # Submit one upload task per chunk; tqdm shows a bar as each future completes
    futures = [executor.submit(SendHoldingsThreadFunc, chunk) for chunk in chunks]
    kwargs = {'total': len(futures), 'desc': 'Uploading data'}
    for f in tqdm(as_completed(futures), **kwargs):
        f.result()
Basically, you pass the futures to the tqdm() function, via as_completed(), as they complete.
One-line assignments. I like these. These are so useful because they are concise. I use them when I'm reading in command line switches:
InputFile = options['-i'] if '-i' in options else "HoldingsSummary.xml"
MaxThreads = int(options['-t']) if '-t' in options else 2
ShouldResolveTickers = True if '-k' in options else False
Verbose = True if '-v' in options else False
Scope = options['-s'] if '-s' in options else "tr"
ChunkSize = int(options['-u']) if '-u' in options else 200
DoSynchronous = True if '-0' in options else False
show_holding_count_by_date = True if '-c' in options else False
isNewLoad = True if '-n' in options else False
shouldResolveLusid = True if '-l' in options else False
shouldChunk = True if '-p' in options else False
start = int(options['-x']) if '-x' in options else None
end = int(options['-y']) if '-y' in options else None
dryRun = True if '-a' in options else False
timeout = int(options['-r']) if '-r' in options else 100
Cool huh?
Python's humble format() makes life so much more manageable. Thanks!
print("python {program_name} -i <inputfile>".format(program_name=program_name))
Here is something I've not spent too much time on, but it's interesting nonetheless: Python's ability to define the types of arguments and return values:
from typing import Optional

def TickerToIsin(ticker: str) -> Optional[str]:
    result = GetDataScopeInstrument(token['value'], ticker)
    value = result['value']
    if len(value) > 0:
        isin = value[0]['Identifier']
        print('{ticker}={isin}'.format(ticker=ticker, isin=isin))
        return isin
    else:
        return None
See how it's saying that ticker is a string and that TickerToIsin() returns a string or None (Optional[str]). Usually, you don't do this in Python – you just return whatever as whatever, just like in JavaScript. But you can add this "typedness" if you like, it seems.
That's it for now :-)