Socio-Politico-Economic Technology

A Good Decision

Rarely is there the time, energy, motivation, or resources to make the right decision. Rarely is there a right decision to be made. Even if there is a right decision, its legitimacy tends to be bound very narrowly by environment and temporality.

Instead, you should strive for a good decision. A good decision perfectly balances the cost of exploration with the benefits of exploitation. A good decision is resilient to incomplete knowledge and an ever changing decision space. A good decision sacrifices perfection for reliability.

A good decision will take you far.


Two Generals – A Story Of Nodes And Networks

I ran into an interesting problem the other day that I just couldn’t solve. It turns out that I could not solve it because it is an unsolvable problem.

The idea behind the project is simple enough: I want to create a distributed application that can run across multiple machines and update at a rate that would be acceptable for a real-time video game loop. In order to achieve this I figured that I would split the game space into nodes which can be linked to other nodes, generally by geographical nearness. Each server would host a set of nodes for which it holds the master copy, plus replicated copies of the edge nodes that are held by other machines. Through a priority rule each node can determine for itself if it should be the next one in its cluster of connections to execute (I’ll probably write more about this once I get it working). In theory, this can work because each node “owns” its own data.

Where the trouble comes in is creating and destroying connections between nodes. The challenge here is that neither node really “owns” the link, and in creating or destroying a link you don’t want to commit to it unless both nodes are able to commit to it at the same time. The trouble with messaging is that certain guarantees are impossible. By using a reply-and-acknowledgement pattern you can know with certainty that your partner did receive a message when you receive an acknowledgement back, but you can’t know that they didn’t receive it, because their not receiving the message and your not receiving the acknowledgement look the same to you: no acknowledgement. This scenario is generally pretty easy to solve: you send a message, and if you don’t get an acknowledgement within the time you expected (or your partner continues to send you other messages with no acknowledgement), then you send it again. On the partner’s side, you need to be able to handle the possibility of getting the same message twice; you can do this by having a message id, a message order indicator, or by processing messages in a way that is idempotent.
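As a minimal sketch of the receiving side, assuming each message carries a unique id (the Message shape here is hypothetical, not from my actual project):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical message shape: a unique id plus a payload.
public record Message(Guid Id, string Payload);

public class IdempotentReceiver
{
    private readonly HashSet<Guid> _seen = new();
    public readonly List<string> Processed = new();

    // Always returns an acknowledgement, but only processes a given
    // message id once, so a duplicated re-send is harmless.
    public Guid Receive(Message message)
    {
        if (_seen.Add(message.Id))
        {
            Processed.Add(message.Payload);
        }
        return message.Id; // the acknowledgement
    }
}
```

The sender can then safely re-send the same message until it finally sees the acknowledgement come back.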

Back to the links between nodes: we don’t just need to know that our partner got the message, we also need our partner to know that we know they got the message, and that we are now both committed to the action. As I came to discover, this problem is more commonly known as the Two Generals problem, and it has been formally proven to be unsolvable. On a network without 100% reliability (all networks) there is no way to ensure that both servers are committed to some change in data, and therefore no way to really have two equal and independent master copies. Given this constraint I have two options. One solution is to use a series of messages to make the possibility of something going wrong increasingly small until there is a tolerable chance of failure. This is basically how GUIDs work: it is not that a collision is impossible; it is simply so unlikely that you just don’t worry about the possibility. The other solution is to rework my design so that one of the nodes owns the master copy of the link and just keeps messaging the other node until it confirms the change.
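The second option, one node owning the link and re-sending until confirmation, can be sketched with a simulated lossy channel (all of the names here are my own invention, not working code from the project):

```csharp
using System;

// Simulated unreliable transport that loses the first few deliveries.
public class LossyChannel
{
    private readonly int _dropFirst;
    private int _attempts;

    public LossyChannel(int dropFirst) => _dropFirst = dropFirst;

    // Returns true (an acknowledgement) only when a delivery gets through.
    public bool Send(string message)
    {
        _attempts++;
        return _attempts > _dropFirst;
    }
}

public static class LinkOwner
{
    // The node holding the master copy of the link keeps re-sending;
    // the partner must handle duplicate deliveries idempotently.
    public static int SendUntilAcked(LossyChannel channel, string message, int maxRetries)
    {
        for (var attempt = 1; attempt <= maxRetries; attempt++)
        {
            if (channel.Send(message)) return attempt;
        }
        throw new TimeoutException("partner unreachable; link change unconfirmed");
    }
}
```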

I’d love to conclude with the solution here, but honestly I am still working through the idea. The links are integral to how each node updates, and if it has the wrong set for even a single cycle it could put the process into an invalid state.


Mike Posner – Everyone Wants You To Forget

“I realized everyone wants you to forget you’re going to die. Because if they can convince you you’re not going to die, you’ll waste your time doing what they want you to do…

…One day, I’m going to die, but before then, I’m going to live, live, live the way I want to live.”

Mike Posner

Peru – Cusco – Humantay Lake

I went for a hike the other day and I think it nearly killed me! It is amazing how much more work it is to breathe at altitude.

Socio-Politico-Economic Technology

Anywhere, Anytime

I don’t have any hard numbers to prove it, but it is my observation that the future of work (if you can still call it “the future”) is for knowledge workers to be able to work anywhere, anytime. While it was absolutely possible before, I think the COVID pandemic really served to shatter the matrix and accelerate the trend. While some loathe the passing of the office and being “always connected”, I for one embrace what it has to offer.

On a micro scale working asynchronously and remotely means that I can decouple where and how I live from where and how I work, which ironically leads to a more natural integration of the two. Is the most important work you can do only available in the city, but you prefer the country? Now you don’t have to choose. Do you have important work you need to get done, but your doctor is only available in the morning? Now you can do both.

On a macro level, async remote work is a huge win for society too. For one thing, it is a huge win for traffic and pollution. It decreases the need for people to crowd into already crowded spaces, and the people in those spaces benefit from less traffic as fewer miles are driven on average per person. Fewer miles per person also means less pollution per person. Additionally, the flexibility frees up resources. People working and enjoying leisure at staggered hours instead of all at once means that resources with peak usage periods suffer them less as usage is spread out (think highways, potable water systems, electricity, grocery lines, gas stations, etc.).

While there is some concern about always being “on,” I think that if that truly becomes a problem, it is worth evaluating whether you have an unhealthy relationship with your work and whether that relationship is being driven from within or from without. It is sad what happened to accelerate this trend, but I for one am excited to explore the limits of the era of anywhere, anytime.


Headspring (Accenture) Blazor Q&A

The following is a post I was involved with while working at Headspring (now Accenture).

If you missed our Blazor webinar—well, you can watch it any time here: Getting Started with Blazor. But we’ve also compiled our answers to the myriad of questions we received from our viewers. Because Blazor’s such a new technology, people are super curious about it. See what our presenters—software guru Glenn Burnside and software consultant Kevin Ackerman—had to say about topics ranging from Blazor’s setup to its relation to other technologies, and how it impacts both end-users and development teams.

Q: Do you think Blazor will be a good competitor to some of the more commonly used JavaScript frameworks?

Glenn: I don’t know that I would call it a competitor as such. But I think that there’s a big market for a lot of front-end application frameworks. So where we think this can be really well-suited is if you have a development team that’s primarily a .NET shop and you maybe don’t have as many specialized roles on your team between front-end and back-end developers. Because this lets you bring one common language, one common runtime, to bear on both sides of the equation: server and client. What we’re really interested in, and again, it’s really early days right now, but what we’re really trying to evaluate is the total time it takes to build features in a rich single-page application model. By utilizing something like this, where we can bring a homogenous language across client and server, does that actually increase the throughput and the quality of what the team’s able to build or not? Because we’ve definitely seen that as front-end richness has grown, the cost of building those kinds of applications has risen with it, even though we’re not necessarily getting more features done. And we’ve got people now, especially in the business world, who have a much richer expectation of what a web app should be like.

Q: Are people just getting started with Blazor comparing it to Silverlight?

Glenn: Yes, that’s the elephant in the room: the comparison. And there’s no getting around that or denying it, for sure. I think the big difference, and the reason why we’re not going to watch a repeat of what happened with Silverlight, is that when we’re working here and targeting WebAssembly, there is no plugin required. There’s nothing additional that has to be added to or extended on the browser for these applications to run. To users, this is just a native web application. There’s no difference between what we’re building here and building anywhere else. And so I think that’s one of the big differences.

Silverlight actually was on a pretty good adoption trajectory at the time it was originally released. A big part of what killed it, ultimately, was the rise of mobile platforms starting in 2007 and Silverlight’s inability to be used on those platforms because you had to add an extension into the browser. Obviously, that wasn’t supported on Android devices and iOS devices.

Q: You’re talking about being really excited about this, but what are the potential downsides?

Glenn: So, especially on a WebAssembly version where everything is running in the client, you are going to see a slightly larger initial payload. They’re doing some very aggressive things on stripping unused code out before they compile down to those final DLLs. And they’re doing a lot of minifying of that codebase. But your initial download could still be in the one-to-two megabyte range. That can be an issue, especially if you’re going to be deploying into targets that are really far away geographically and dealing with the latency of that.

Right now, there is a nominal performance tax, if you will, versus some of the native libraries. And that’s because right now, in this initial version, the .NET runtime is actually doing JIT compilation and then running that, basically hosted inside the WebAssembly engine. It’s not necessarily doing direct ahead-of-time compilation of the bytecode to native WebAssembly.

Q: What browsers does Blazor run on?

Glenn: In terms of browser compatibility, this runs on all modern browsers, and by modern browsers I mean evergreen: Chrome, Microsoft Edge (Chromium edition), Safari, Android, iOS, and Firefox. So as long as you’re not targeting Internet Explorer and older browser versions, you’re good. I believe even the prior Edge version that wasn’t running on the Chromium engine would also support this.

So as long as you’re targeting a modern browser environment, you’re okay on compatibility, and it works across platforms. Because it’s .NET Core, you can also host the server side on both Windows and Linux.

Q: Can Blazor use an existing Razor .NET Core app? The goal being to start using it without rewriting the entire UI.

Glenn: Yes, absolutely. So it’s actually really, really easy to do that and you can either do it server-side or client-side where you can actually put that as a sub-unit inside your main app.

There are a few changes you make in the app settings and properties, and in how you set up the middleware pipeline to direct to the routing endpoint. And then you can just start. Most of the time when we’re doing WebAssembly, the client-side piece actually ends up being hosted as a starting payload inside the backend, which is just a web API endpoint.

So, yeah, absolutely, really easy to merge that in and start with a hybrid solution.

Q: What would be an easy way to pass objects from Blazor to JavaScript running on the page?

Kevin: That would depend on your scenario, but there’s an interop method. So if you’re actually trying to call a JavaScript method with your C# objects, it’s really easy: you just pass it the method name and your object, and it’ll serialize it to a JavaScript object. There are definitely ways to do it; it would just depend on the specifics of your case.

Glenn: Yeah, I think that the interop story is still developing there, but just knowing that we have the JSRuntime class to use to invoke JavaScript functions is really valuable, because obviously the JavaScript piece of web programming isn’t just going to disappear, poof, overnight.
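For reference, a call through the IJSRuntime abstraction might look like this sketch (the showUser JavaScript function and the User type are hypothetical, not from the webinar demo):

```razor
@inject IJSRuntime JS

@code {
    // Hypothetical C# type; it is serialized to JSON for the JavaScript side.
    public record User(string Name, int Age);

    private async Task ShowUserAsync()
    {
        // Invokes window.showUser(user) in the page's JavaScript.
        await JS.InvokeVoidAsync("showUser", new User("Ada", 36));
    }
}
```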

Q: Can you integrate Blazor with legacy ASP.NET, not .NET Core?

Glenn: No. I mean, you can integrate in the sense that they can both be hosted side by side, but Blazor requires .NET Core. Those libraries are not going to be compatible with legacy ASP.NET, which doesn’t run on .NET Core.

Q: What’s the update story for a Blazor app? Both client and server. With shared code, this could potentially be an issue.

Glenn: So this is asking about if you’re trying to update the client-side and the server-side separately. It’s kind of like, in any app, when you’ve got a rich front-end piece and you’ve got a back-end API, what’s the story with keeping them tied together? Is that exacerbated because you’ve got these shared components between them? What do you think, Kevin?

Kevin: The only part that’s tied between them is that DLL for your shared project, and mostly that’s used for API model and things like that. So I think the upgrade story would more or less be that your shared project is going to be that shared piece that you need to upgrade alongside both applications where necessary. And if that’s a problem, you can also have multiple shared projects. You could actually have a part of it that you upgrade, a part of it that you don’t.

Glenn: I think with the way we’ve looked at this really is, we talk about the front end and the back end, but really those two pieces probably should almost always be on the same release cycle. Wouldn’t you agree, Kevin, in these scenarios?

Kevin: Yeah. Ideally. If you start having a larger application, you might have to look at that more, but ideally, if you can just keep the front and back end in sync, it would make it easier.

Q: If you have a client-side Blazor app and then we make changes and we redeploy, do users have to clear their cache or anything like that, or will they able to basically just receive that new payload and update?

Glenn: The page refresh model will actually pick that up for end-users. This is almost like having JavaScript code: if I have a piece of JavaScript in my front-end today and it changes, clients pick it up automatically, as long as you’ve got your cache headers set correctly and that sort of thing.

And it’s a very similar situation here. So this is not an install or a desktop app scenario that we’re talking about. This is all going to be managed behind the scenes for the end-users.

Q: How do you avoid users bypassing authentication in Blazor WebAssembly?

Kevin: That’s on your backend. You’re actually just running a regular API server, so you have all the authentication options available to you that you would for any web API server project. On the front end, you can reuse some of that auth code that determines whether or not you can make requests, what resources you can access, and what you can do with them.

You can actually reuse that model on the front end and backend. And because we’re ensuring that we’re always validating it on the backend, we’re always authenticating on the backend. Even if you have a malicious client, they won’t be able to access server resources that they’re not allowed to.

Glenn: Yeah. I think that’s a common theme that we keep coming back to. Architecturally, we’re really still designing our systems the same way we do today, where we’ve got rich dynamic code running in the browser. You know, today it’s mostly JavaScript: Angular, React. And it’s communicating with the server via an API. So you’ve got to secure that back end, and you’re going to have to manage how the client makes those authenticated requests over HTTP to the server. The only difference now is that that client-side code is .NET- and C#-based.

But we’re still communicating via HTTP client calls, we’re still making those asynchronous calls and getting JSON payloads back. So architecturally, all those same concerns are in place for security, like you said, for repeated validation, for cache-management. None of that really goes away.

Kevin: You’re looking at the same story for the client’s side: the same concerns around any client code is code that can be accessed by the client, that can be manipulated by the clients. Which is why it’s important that the projects are divided in those three parts, so that you’re only shipping client-side code to the client. But other than that, it’s the same story as Angular or React or any front-end framework you’re using.

Q: What about support for server-side, pre-rendering?

Kevin: Server-side pre-rendering is a good use case for Blazor Server, if that’s important. There are also some mixed-use techniques where you can pre-render the initial payload using Blazor Server and then rely on client-side rendering with WebAssembly. That takes a little more up-front work and might not be valuable enough for some use cases, especially if you’re dealing with internal business apps where the initial one-time load isn’t that large.

There are ways to do client-side Blazor with server-side pre-rendering, but there are some downsides of rendering the page on the server, and then again on the client. (See this blog post, for example: Pre-rendering a client-side Blazor application)

Q: Does Blazor WebAssembly currently support gRPC?

Kevin: I don’t think so. I mean, that’s a separate technology. You can use gRPC to make requests, or you can use HTTP or raw TCP. As far as I’m aware, anything you can run as server code on the client, you’d be able to use. I don’t think I’ve seen any examples of that yet, but that doesn’t mean that they’re not out there.

Glenn: And I expect we’ll start to see more of that gRPC adoption because we’re starting to see it pick up more in general in the .NET space. And obviously you’ve got potential there for higher throughput, lower latency, smaller payloads, and less bandwidth versus doing full HTTP and JSON objects, especially when you’re really just targeting the backend API for your particular application.

Q: How do you debug in Blazor WebAssembly since it’s running in the browser?

Kevin: So WebAssembly does support debugging; you are sending symbol information down. That is still an evolving story, and I’m not sure where it’s landed at the moment. The other thing is, if you have any issues with client-side debugging, you can switch over to server-side, and then everything’s running on your server and it’s just regular C# code at that point.

Q: How does Blazor relate to MVC? Or is it a separate technology?

Glenn: Yeah, so, at this point, I think I’d say it’s a separate technology. ASP.NET MVC has kind of evolved: it went through the split into MVC and Web API, and then we got the newer version with ASP.NET Core using Razor Pages.

Which was really kind of the next evolution of that. So I think the story here is this is all still using the Razor rendering engine, and it’s still using Razor Pages and Razor components as the primary user interface definition model. But ultimately what we’re doing, especially with the WebAssembly version, is we’re defining all of those components, but running them inside the browser.

And so again, the back-end is just a web API project. Just like you would build any other web API project in ASP.NET Core at this point. So, related technology, but not quite the same as MVC. What do you think, Kevin? Anything you want to add to that?

Kevin: Just on the web API side of it, not necessarily MVC, but it’s still going to be very familiar. You’re receiving requests, you’re handling it, you’re returning a response. But yeah, the view side of it is moved over to the client, and so that’s going to be a little bit different. But you’re still working with Razor Pages, so it’ll still be a pretty familiar experience.

Glenn: Yeah, I think we all found it was pretty easy to get started and wrap our heads around it. It wasn’t too big a shift in our programming mindset. It was more just about where the code ran.

Q: In getting started with Blazor, it looks like it’s mostly based on components for UI. What about non-visual components, like custom timers or logical data sets, etc.?

Glenn: So actually, I think that’s the great thing. This is one of the things that’s really exciting to me about this, because you’re absolutely right, it is primarily about UI. But it’s all .NET code. So, at any point, you’ve got access to anything that you want to write in C#, non-visual, and you can use dependency injection in the client-side code to have it pushed into your UI components and run for you.

Kevin: Yeah, that got me thinking. I don’t actually have any totally non-visible components, but you can see here, if we just took this piece out of it, it’s not going to render anything. So you can have non-visual elements.

You can use cascading parameters, which I don’t think we have time to talk about here, but you can use something like that to pass information down. So, for instance, the form component itself is actually a non-visual component. You don’t see the form, you’d just see the form fields, but the form ties them all together.

So, you definitely can have non-visual components and you could even have one that doesn’t show anything and then later in your application, it does. So that it doesn’t make a distinction between the visual and non-visual, but you can definitely make both.

Q: What about non-visual components that trigger events?

Glenn: I don’t know about that one. I think at that point, you could write it as a Razor page that just doesn’t render anything. And then when something else happens, that triggers an event, and it could cause something to get rendered or send off a message to another one of the components in the render context to fire up and run.

So I think there’s a ton of opportunity for those things out there. And, certainly, a lot of the major component vendors like Telerik and ComponentOne are starting to ship out Blazor libraries that run both server-side and client-side.

Kevin: I just wanted to show—I’m not sure exactly what we’re talking about with triggering events, but you can create a div that doesn’t show, and you can have any of the events you’d typically have in JavaScript: on click, on load, all those different things.

And then you can actually have it handle the method. You could have a method down here that’s just public void DoSomething.

And now I can have it set to run, say, on page load. So you could have a component like that and it will run this code. You can have multiple methods on it, and you can use any of the JavaScript events. I hope that covers the question.
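To make the timer case from the question concrete, a non-visual component along these lines could render nothing and simply raise a callback (the component and parameter names are my own sketch, not from the demo):

```razor
@* Hypothetical non-visual component: renders no markup, fires a
   callback on an interval. *@
@implements IDisposable

@code {
    [Parameter] public int IntervalMs { get; set; } = 1000;
    [Parameter] public EventCallback OnTick { get; set; }

    private System.Threading.Timer? _timer;

    protected override void OnInitialized()
    {
        // Marshal back to the renderer's context before raising the event.
        _timer = new System.Threading.Timer(
            _ => InvokeAsync(OnTick.InvokeAsync),
            null, IntervalMs, IntervalMs);
    }

    public void Dispose() => _timer?.Dispose();
}
```

A parent could then use it like `<IntervalTimer IntervalMs="500" OnTick="HandleTick" />`, assuming the file is named IntervalTimer.razor.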


Code Generating Code

Code that generates code can sometimes be difficult to follow, but it provides huge benefits in saving time and making your end code more maintainable. Today I want to talk a little bit about when and why you would use code that generates code, and what your options are in C#.

There are a few common scenarios that come up where code generating code can be a powerful solution. One scenario is repeating similar logic, such as method signatures. A common case in C#, because you must declare generic type parameters explicitly, is the pattern of offering the same method with a differing number of generic overloads. For instance, a recent example I wrote is a utility for generating a strong hash code from the variables on a type. The method signatures look like this:
int Combine<T1, T2>(T1 value1, T2 value2)
all the way up to
int Combine<T1, ... T16>(T1 value1, ... T16 value16)
I could have written this instead as
int Combine(params object[] values)
which I did also create for greater than 16 parameters, but I wanted to avoid unnecessary allocations in common scenarios. By using code generation I was able to write this method once and have it generate the other 15 overloads automagically.
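For illustration, one generated overload might look like the following; the seed and multiplier constants here are illustrative, not necessarily the ones my utility emits:

```csharp
using System;

// Sketch of a generated overload plus the params fallback.
public static class HashCodeUtil
{
    // One of the 15 generated arities (here, two values).
    public static int Combine<T1, T2>(T1 value1, T2 value2)
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + (value1?.GetHashCode() ?? 0);
            hash = hash * 31 + (value2?.GetHashCode() ?? 0);
            return hash;
        }
    }

    // Allocating fallback for more than 16 values.
    public static int Combine(params object[] values)
    {
        unchecked
        {
            int hash = 17;
            foreach (var value in values)
                hash = hash * 31 + (value?.GetHashCode() ?? 0);
            return hash;
        }
    }
}
```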

(Note: I discovered in hindsight that there is now a System.HashCode type that does the same thing and actually looks very similar. Although I will be removing this type from my code I will continue to use this technique for similar scenarios.)

Code generating code is also useful in scenarios where you need to create some boilerplate when you add new types to your program. For example, if you are writing your own data access or serialization, you may need to generate a mapper for each of your DTOs. You could put it on the other programmers on your team to remember to write the mapper when they create a new DTO, remember to update the mapper when they update the DTO, write it correctly, and update all of the mappers whenever the mapping code changes. Or, you could write a code generator that centralizes the logic and ensures that it stays up to date with the rest of your application.

The last common scenario I will discuss is the reflection-based registration logic that usually happens at the startup of the application, especially if you are using a container. While startup is usually the best place to handle slow reflection-based logic, it is not ideal for the boot time of your application. As we move increasingly towards smaller and smaller apps that are spun up and down frequently in the cloud, it is important to optimize startup time, even more so if there may be no instances of your app running until a request comes in. Instead of performing this reflection at run time, it can be performed in advance at compile time to create code that manually registers all relevant types. This buys you the best of both worlds: the convenience and maintainability of automatic registration with the performance of manually registering everything.
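As a sketch of what such generated output might look like, assuming a simple factory-dictionary container (the handler types here are hypothetical stand-ins for whatever a generator would discover):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical types that a generator would discover in the project.
public interface IHandler { }
public class OrderHandler : IHandler { }
public class UserHandler : IHandler { }

// Hypothetical generator output: explicit registrations that replace a
// reflection-based assembly scan at startup.
public static class GeneratedRegistrations
{
    public static void Register(Dictionary<Type, Func<object>> container)
    {
        container[typeof(OrderHandler)] = () => new OrderHandler();
        container[typeof(UserHandler)] = () => new UserHandler();
    }
}
```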

In C# you now have two major flavors of code generation: T4 Templates and Source Generators. I will not give a comprehensive explanation of both technologies and how to use them here, but rather focus on the scenarios where I have found each to have some benefit.

T4 Templates have been around for some time, and are useful in scenarios where you would like to be able to readily see the generated code and use it with IntelliSense elsewhere in your application. T4 Templates are generally configured to run every time you save the template, generating a .cs (or similar) file with a matching name. The nice thing about having the generated file is that you can inspect and debug the code right in your application as though it were any other code file. The one gotcha is that you should never manually update the generated file; instead, always update the template to produce the code you want. The last thing worth noting about T4 Templates is that they run individually in a sandboxed environment. This makes them run faster and avoids referencing other generated types, but it also means that you must manually reference the libraries to include in that sandbox, and that there may be some other quirks, like the template using a different version of C# than the rest of your application.
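For illustration, a template along these lines could emit a family of Combine overloads (the names and layout are my own sketch, not a drop-in file):

```t4
<#@ template language="C#" #>
<#@ import namespace="System.Linq" #>
<#@ output extension=".cs" #>
namespace MyApp
{
    public static partial class HashCodeUtil
    {
<# for (var n = 2; n <= 16; n++)
   {
       var typeList  = string.Join(", ", Enumerable.Range(1, n).Select(i => "T" + i));
       var paramList = string.Join(", ", Enumerable.Range(1, n).Select(i => "T" + i + " value" + i));
#>
        public static int Combine<<#= typeList #>>(<#= paramList #>)
        {
            // ...hash-combining body emitted here for each arity...
        }
<#
   }
#>
    }
}
```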

Source Generators are a newer technology for code generation, with a slightly different use case than T4 Templates. Instead of creating code files directly in your project, Source Generators inspect the project as it is compiled and add their code directly to the output assembly. Because of this you can’t directly see a code file, or reference generated types with IntelliSense in your source code. This technique of code generation is less a means of automating what you could write by hand and more a way to move the performance hit of reflection from run time to compile time. Source Generators are usually placed in their own project, where you can import the libraries they need and control the version of C#. One scenario you need to be careful of when using Source Generators, though, is referencing your main project from the generator project (not uncommon, since you are replacing reflection in the main project). This can cause you to need to build twice on every build: once to make the build available to the source generators, and again to incorporate the generated source into the output assembly.
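As a rough sketch, a generator emitting registration code might look like the following; it assumes the Microsoft.CodeAnalysis packages that generator projects reference, and the emitted class is hypothetical:

```csharp
using Microsoft.CodeAnalysis;

// Minimal source generator sketch; not a complete, production generator.
[Generator]
public class RegistrationGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // No syntax receivers needed for this simple sketch.
    }

    public void Execute(GeneratorExecutionContext context)
    {
        // A real generator would walk context.Compilation here to find
        // the types to register instead of emitting a fixed stub.
        const string source = @"
public static class GeneratedRegistrations
{
    public static void Register(
        System.Collections.Generic.Dictionary<System.Type, System.Func<object>> container)
    {
        // one line per discovered type would be emitted here
    }
}";
        context.AddSource("GeneratedRegistrations.g.cs", source);
    }
}
```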

I discussed here several common scenarios for using code that generates code, and the two major flavors of it in C#: T4 Templates and Source Generators. By giving a high-level overview of each, I hope to have provided you with some guidance on which path is best to explore for your current project, or just inspired you to learn a little bit more about each of them.

Happy coding!


Solving Interesting Problems

There seems to be an assumption that people are generally lazy and need to be persuaded to action by an outside force such as money, or more abstractly, the threat of destitution. However, I reject this from experience.

What compels me to code is not an external drive, but an internal desire to solve interesting problems. Yes, I want to make money from what I do and ultimately I want my code to help people, but what makes it interesting for me is that it is challenging, like solving a puzzle.

I first learned to code through video games which allowed significant configuration through their editors, and spreadsheets which become so much more powerful with a few simple equations added. From there I leveled up my skills with VB, C#, SQL, HTML, CSS, JavaScript, and TypeScript, and was a fully capable programmer before I started taking formal classes with instructors to direct what to learn next.

Now I find myself on sabbatical and programming just for the fun of it. Sure, I hope some of what I am doing now will translate into a project later that will make money and/or help people, but I would do it again just for the challenge and learning experience. Programming just to solve interesting problems is the primary reason I am able to write code for other goals.

I sometimes wonder how the world might be different if more people were able to take the time to pursue self-directed goals.

You can check out what I’ve been up to here


Polluting The Unnatural Environment

This is an actual recruiter email I received (names redacted). I receive similar emails at least once a week.

As much as I despise mass marketing because it pushes the costs of your marketing onto your prospective customers, the industry, and society at large, I get why it is a thing that is not going away soon. But this is just lazy and wasting everyone’s time. Now their domain is going to get marked as spam, no developer will get a paid gig, more developers are going to stop looking at such emails altogether, their company won’t get a commission, and their client won’t get the programming expert they need.

Just because you can’t (or won’t) quantify the cost of your activity does not mean that it is free, and even if it is somehow free to you, that doesn’t mean you aren’t stealing those resources from someone else. The natural world is not the only environment that can get polluted.

The previous is a small (and admittedly petty) example of a problem I see at companies I have worked for and in the world at large. Every day we are inundated with ads for <<insert_product>> made just for <<insert_name>>; it is bad for us as a society, and bad for industry. We pour ever more resources into stealing attention without creating more value, not because it is somehow efficient, but because our current system is bad at addressing tragedy-of-the-commons problems. I hope that whatever arises next for society includes the proper infrastructure to solve such issues.


The Politics Of Consent

Consent seems like a simple concept, but when you really dig into it, how you define it marks one of the major rifts in the politics of our time. Consent, and what it means to have given consent, is at the center of all democratic ideologies. Where all men are said to be equal, the definition of equality is of significant importance. The typical definition of consent will speak of permission or agreement on what should happen, but says very little about the conditions under which that agreement is made. I wish to explore that side of consent and show how differing definitions of consent are at the heart of our differing political parties.

What do I mean when I speak of “the conditions under which that agreement is made”? Well, before any contract can be agreed upon there must first be a negotiation, and rarely do negotiators meet on an even playing field. The outcome of a negotiation, the agreement, the final contract, are heavily influenced by the power dynamics between the negotiators.

As an extreme example of the role of power dynamics let’s imagine this scenario; one man is the owner of a lush oasis, surrounded by hundreds of miles of desert on each side, and a thirsty survivor wanders in from the sands. How they might interact will depend heavily on the power dynamics between them. The owner of the oasis, having been there first and having laid claim to all of its resources, may offer to allow the survivor to stay and eat the food and drink the water so long as the survivor agrees to do all the work to gather food and water for the both of them, as well as build the shelter and perform all the other maintenance necessary. Having no other option, the survivor may agree to this contract and give his consent.

On the other hand, perhaps the survivor arrived with a deadly weapon. Now having the upper hand, the same contract is struck but in the reverse, with the original owner of the oasis now becoming the workhorse of the survivor, having no other option and thus giving his consent.

Alternatively, in either of the above scenarios the two may instead decide to prioritize the collective best interest above their own and agree that they should both partake in the labor and enjoy the benefits of the oasis. While this scenario is possible, it is also unstable as long as one of the participants has more power than the other (for instance, if the survivor keeps the weapon despite having reached such an agreement), since the one with the power may always decide to change their mind, and it will always be in both of their minds when future agreements are made.

From this example I hope that you will see that while in one way each person has given consent, in another way they very much have not. These differing understandings of giving consent are at the heart of each political party.

One political philosophy views the mere act of reaching an agreement as a proof of consent; power dynamics either don’t exist or are natural, and questioning them is not important. Since no one would enter a contract that makes them worse off, all contracts make the parties to the contract better off, and are therefore good. In this view of the world it is ok that the powerful take advantage of their power and strike agreements that are primarily for their benefit, because a side effect of them pursuing their own interests is that they also create benefit for everyone else.

Another political philosophy acknowledges that power dynamics exist and that they can produce negative outcomes. They are aware that while paying minorities and women less is better than not hiring them at all, it is still a much worse outcome than hiring them and paying them as though there was not such a large gap in their negotiating power. They’ve determined that a subset of the working class is privileged because there is less of a gap between their negotiating power and that of the owner/employer class. If only they could erase the historical power differences between these two subsets of the working class then they could reach equally consensual agreements.

However, a new ideology that I see gaining momentum takes this a step further. Yes, a lower class basically being held at gunpoint in an employment negotiation can hardly be called consensual and produces negative outcomes, but also to some degree almost all employment negotiations are heavily lopsided in the employer’s advantage. This is due to a variety of factors, but one of the largest is that employers can typically afford to lose an employee without it making much of a difference, whereas an employee who loses their job also loses their sole source of income and often a large part of their identity. To correct this difference in power dynamics, or even shift it in the other direction, they call for a larger reform to ensure that workers of all demographics are not dependent on owners and employers. There are many competing ideas for how this can be accomplished, from relatively minor amendments to the existing structure of society such as universal healthcare and a basic income, to much larger overhauls such as combining workers and owners into a single unified class.

For both moral and efficiency purposes consent for me is not just an agreement, but an agreement reached under relatively equal power dynamics. When negotiators are playing on the same field they can each trade concessions that have little value to them and high value to the other negotiator and maximize the outcome of the negotiation. When one negotiator overpowers the other they may demand concessions that have little value to them but high value to the other, since the other will be forced to agree anyways. For this reason my thoughts tend towards the new ideology taking shape. When we level the playing field we create a better and more bountiful world.

More than agreeing with me or arguing ideologically my hope is that you will see that not all consent is created equally. How loosely you are willing to define it and the conditions under which it can be acquired has a huge impact on what you view as morally imperative or reprehensible.



What is leadership? You lead, people follow, right?

To me it is a little more complex. Given the right title and enough force anyone can compel others to their will, but true leadership is something different. If people follow you only because you might hurt them or they might lose their job you’re not so much a leader as a manager of human resources; and I mean that in the most derogatory way possible. To me leadership is creating a character and a narrative so compelling that others choose to follow without the threat of force.

True leadership is rare; when a man or woman embodies the shared story of those who would choose to follow. It is not the selfish act of building an empire for oneself, even if as a side effect it accidentally benefits others, but that of building prosperity for the tribe with intention. The true leader does not fear true democracy. They are not concerned that their followers may choose to follow another.

So yes, a leader leads and followers follow, but there is so much more to it than that.


Defaults and Deviations

Code should be as verbose as necessary to describe its intent, but no more so. Boilerplate and other code that describes the same behavior over and over again makes your program more work to read and understand, and increases the risk of errors.

  • Copying similar logic over and over again leads to needing to read more code to understand what the program is doing.
  • When updated, copy and pasted logic must be changed in many places, which are easy to miss and which lead to large change sets.
  • Having similar but slightly varied code hides the deviations, forcing you to read over nearly identical code just in case there might be a difference.

A better approach is to write generic code that can serve in the place of boilerplate. When you find yourself repeating some boilerplate ask yourself “Does the behavior of this deviate far enough that making it generic would make it overly complicated? If the behavior does not deviate now, will it likely in the future, or do both blocks of code resolve to the same conceptual idea?”. Each logical construct should be given only one definition within a code base.

For instance, is it useful in a web application to specify for each endpoint individually that it uses HTTP, that it accepts and returns JSON, that it returns a 200 HTTP status code when successful, a 422 when there is a validation error, a 500 when there is a server error, or would it be better to declare that all endpoints have those attributes unless specified otherwise? Is it useful to specify for each endpoint that it logs the request and result, that it performs database changes in a transaction, and that the request is processed through a half dozen layers, each with unique method names, or would it be more useful to put all the unique logic of a request together in one place and abstract away all the other layers to the request pipeline?
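As a rough sketch of the idea, here is a framework-free pipeline in Python (the `endpoint` decorator, its error mapping, and the handler are all hypothetical names of my own, not any particular library’s API):

```python
import json

def endpoint(handler):
    """Apply the application-wide defaults to a handler: JSON in and out,
    200 on success, 422 on a validation error, 500 on anything else.
    Each endpoint then only describes how it deviates from the defaults."""
    def wrapped(raw_body: str):
        try:
            result = handler(json.loads(raw_body))
            return 200, json.dumps(result)
        except ValueError as err:  # stand-in for a validation failure
            return 422, json.dumps({"error": str(err)})
        except Exception:
            return 500, json.dumps({"error": "internal error"})
    return wrapped

@endpoint
def create_user(body):
    # Only the logic unique to this endpoint lives here; status codes,
    # serialization, and error handling come from the shared default.
    if "name" not in body:
        raise ValueError("name is required")
    return {"created": body["name"]}
```

Every endpoint declared this way inherits the same serialization and status code behavior; one that genuinely needs to deviate can state that deviation explicitly instead of restating the defaults.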

In short, most of your code should describe a default for your program or a deviation from the default. Specifying the same behavior over and over again is an antipattern that can slow down initial development and will increase the cost and risk of maintenance thereafter.


The Authoritarian

The parent that says “I will listen to what you have to say but I will make all the important decisions.”

The friend that says “I will listen to what you have to say but I will make all the important decisions.”

The spouse that says “I will listen to what you have to say but I will make all the important decisions.”

The politician that says “I will listen to what you have to say but I will make all the important decisions.”

The CEO that says “I will listen to what you have to say but I will make all the important decisions.”


Procedural Spaghetti

It is tempting for programmers who don’t understand design patterns or all the features available for their language of choice to write all their code procedurally. Do this, then this, then this, for every operation, regardless of how much commonality they share. The only reason this works at any scale is because some more thoughtful programmer spent the time and effort to build a framework that creates scopes for your code to live in and keeps your spaghetti from getting all tangled. Please don’t let the convenience of working within a framework keep you from writing more maintainable code yourself.

Socio-Politico-Economic Technology

Technology Is Only A Tool

Technology is often sold to the public as a kind of fix-all; the duct tape for all of society’s issues, and the engine that inevitably drives progress forward. As someone who is deeply invested in technology I used to buy into this narrative myself, but it seems much more appropriate to look at technology as a tool that can be used equally for good or for evil.

The best example I can come up with is the invention of nuclear energy. In the right hands, in a just society, nuclear energy can be harnessed safely and effectively to increase productivity and decrease the need for manual labor. Dramatic increases in our ability to produce energy have been closely correlated with dramatic increases in our material well-being.

On the flip side, the same energy and the same automation can be used to make a large number of humans obsolete; to create a society that has abundance, but gives to few. Eventually we will reach a post-scarcity society where automation begets more automation with little human intervention, and what then? Even now there is a general consensus that supply-side, trickle-down economics is a deficient model to explain the modern economy.

And, more terrifying still is the existential threat that has come with harnessing nuclear energy as a weapon. While societies across the globe have never been great at just getting along, they’ve also never had the ability to completely obliterate each other and ruin the planet for everyone else in the process. More than once such a tragedy has been narrowly avoided by the decisions of only a few individuals.

Technology is a powerful tool, but in order to achieve progress we need to focus on our social systems with even more zeal. Left in the hands of the few technology will be used to benefit the few at great expense to the rest of society. We must be vigilant in ensuring that decisions about how we use technology are democratized; everyone, from the CEO of the largest tech company to the guy at home “liking” a post must have an equal say in the role we want technology to play in our society. We need experts to handle the details, but the masses should be the ones setting the course.


Worth And Worthiness

If you have aptitudes that are valuable today, it does not imply that they were valuable 100 years ago or that they will be valuable in 100 years. Further, just because they are valuable where you live does not mean that they are valuable elsewhere. You exist at the serendipitous intersection of geography, time, and talent. Though you may have worked hard to further develop certain talents, where you are now is largely a happy accident for which you should be grateful, but not proud. Have some empathy for those who are not so lucky.


Don’t Listen To Clients

Don’t listen to clients. Or rather, don’t take their first set of “requirements” as written in stone. Especially when there is no project manager filtering ideas.

(Don’t) Just Do It

Often clients will express their needs in terms of concrete functionality. In these scenarios it is easy (and lazy) to say you are just giving the client what they want, when really you are just giving them your interpretation of their interpretation of the solution to a problem.

Finding A (Better) Solution Together

The better thing to do is to work with the client to understand what they are really trying to accomplish; what does success look like to them? You will often discover that while their specific ask would sort of solve the issue, you can actually do much better because you have more context around what is possible and how the system actually works under the covers. They are experts in their domain as you are in your own, and you both have a valuable perspective to bring to the table. When this is the case, you should always present the alternative before going to work. They may love the idea, or they may nonetheless reject it, whether because they are stubborn or because you misinterpreted the real problem.

Either way, the process of questioning the initial ask rather than blindly plowing forward reveals insights that help you produce a better solution.


People Change

I just finished going through a backup of a blog I had taken down years ago and uploading the posts that I still felt were relevant (pretty much all of the pre-2020 posts), and I was dismayed and heartened at the same time.

I was dismayed at some of the viewpoints that past me had held, and I was greatly surprised at how much my worldview has changed in the last few years.

At the same time, I was heartened that beyond my 20s I was still learning new things and developing a deeper insight about the world around me.

I’m eager to look down on myself again in ten years. 😀



A lot of values have been associated with NULL – zero, +/- infinity, empty string, +/- forever, and others. So which value is right? I would argue none of these. The only value that should be associated with NULL is… NULL, or rather, nothing – unknown. Using NULL to mean anything else is confusing, inaccurate, and complicates queries later on.

Imagine you have records that are valid from a date and to a date, and you use NULL to represent forever in the to date. If you do this, you will have to do NULL checking any time you want to grab all the currently active records, or any time you want to check the active records between two dates. Not only is this more difficult to work with, but it is also less efficient.

Another common situation that comes up is using NULL instead of zero. When you do this, it means that anywhere you display the value to an end user you have to check for NULL and convert it to zero for display.

But perhaps the most important reason not to use NULL to represent another value is that when you do so you cannot know whether the person who entered that record meant forever or zero, or whether what they meant was that they don’t know. If I tell you that I have zero apples versus if I tell you that I don’t know how many apples I have, I am telling you two very different facts.
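To make the date-range example concrete, here is a small sketch using SQLite (the table, column names, and the '9999-12-31' sentinel are made up for illustration). An explicit "forever" sentinel keeps the active-records query a plain range check, while NULL stays reserved for "genuinely unknown":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE membership (
        person     TEXT NOT NULL,
        valid_from TEXT NOT NULL,
        valid_to   TEXT  -- NULL means "unknown", never "forever"
    )
""")
conn.executemany(
    "INSERT INTO membership VALUES (?, ?, ?)",
    [
        ("alice", "2020-01-01", "2021-01-01"),  # expired
        ("bob",   "2020-01-01", "9999-12-31"),  # open-ended: explicit sentinel
        ("carol", "2020-01-01", None),          # end date genuinely unknown
    ],
)

# Because "forever" is a real value, the active-members query is a simple
# range check with no NULL handling; rows whose end date is truly unknown
# drop out (SQL's three-valued logic excludes NULL comparisons) instead of
# being silently treated as "forever".
active = conn.execute(
    "SELECT person FROM membership WHERE valid_from <= ? AND valid_to >= ?",
    ("2023-06-01", "2023-06-01"),
).fetchall()
```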



I recently discovered another form of UUID that I’m pretty excited about: ULIDs. ULID stands for Universally Unique Lexicographically Sortable Identifier (so UULSIDs?), but what does this actually mean? Like UUIDs, they are 128-bit identifiers meant to be assumed unique; as with other UUIDs, collisions are hypothetically but not practically possible.


What sets ULIDs apart though is that they can be generated sequentially (with millisecond accuracy) by clients without central coordination. This is achieved by dedicating the first 48 bits to a timestamp, and the remaining 80 bits to a random value.

Another interesting feature of ULIDs that sets them apart from other UUIDs is the way they are generally presented. While a typical UUID is represented as 36 characters with hyphens between segments, a ULID is 26 characters with no hyphens, using Crockford’s Base32 encoding. This encoding makes the value more compact, while also reducing the risk of transcription errors by omitting special characters and certain common characters that look too similar.
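A minimal generator sketch looks like this. It is not the reference implementation, and it omits the optional monotonic tiebreaker that the ULID spec suggests for identifiers created within the same millisecond:

```python
import os
import time

# Crockford's Base32 alphabet: no I, L, O, or U, avoiding look-alike
# characters. It is in ascending ASCII order, so larger 128-bit values
# always encode to lexicographically larger strings.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid() -> str:
    """Generate a 26-character ULID: 48-bit timestamp + 80 random bits."""
    ts = int(time.time() * 1000)  # milliseconds since the Unix epoch
    value = (ts << 80) | int.from_bytes(os.urandom(10), "big")
    # Encode the 128-bit value as 26 Base32 characters (130 bits of
    # capacity, so the top two bits are always zero).
    chars = []
    for _ in range(26):
        chars.append(ALPHABET[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))
```

Because the timestamp occupies the most significant bits and the alphabet sorts with ASCII, simply sorting the strings sorts the identifiers by creation time.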


Combined, these features create some desirable properties. Identifiers can be created and assigned in a distributed fashion without accessing a central authority. This means that my application can create graphs of records offline before actually persisting them, and in general reduces the number of round trips I need to make to a database. A classic example for SOA applications is being able to create an identifier for an entity on the client and then use that identifier to persist data to multiple services without needing to wait for a response from any of them.

Since the identifiers are sequential, inserting into databases tends to be much more efficient, both in terms of insertion speed and in terms of reducing fragmentation. This means that we can insert records more quickly, query the data more quickly, and actually reduce the amount of disk space our data uses, generally with no downside. Also, because the timestamp is encoded into the identifier we automatically have a “created on” field that we can use in querying the data. The one weakness of this design is that it does reveal this timestamp in the identifier, meaning if you send it to a client they can infer at least that much about the record. This is acceptable in most cases, but it is something to keep in mind.

Looking at the choice of encoding we also see some advantages over other UUID formats. First, by reducing the length, dropping unnecessary hyphens, and avoiding characters that are likely to be mistaken, the value is much easier to read and transcribe. Also, the value is more compact and URL safe, which makes it ideal for web applications where it is not uncommon to put the identifier in the URL path.

A Word Of Warning

A word of warning though; beware of using ULIDs as unique identifiers in SQL Server. While a ULID is a 128 bit unique identifier and otherwise meets the spec for the type in SQL Server, you may not gain the advantages of sequential inserts. The reason for this is that SQL Server has a proprietary order in which it stores the data for a unique identifier which would put the timestamp portion of a ULID in the wrong position to be useful for sorting.


The Purpose Of Education

I attended a public high school and state college, and during that time I spent a lot of energy agonizing over the question of what the purpose of education is. Back then I had an intuition that a lot of the stuff I was learning didn’t seem very important or relevant. Now that I am a few years older as a mid-career millennial I still feel mostly the same way, but I have a better understanding of why. When discussing what the purpose of education is I think it is helpful to separate what it is now versus what it ought to be.


To figure out the purpose of education as it currently stands I think it is helpful to look at the two predominant theories of what purpose it serves and then analyze what materials are covered or seem to be missing from the standard curriculum to determine which of the theories is most plausible.

The first theory for the purpose of education is that it is for the benefit of the educated and the citizenry at large. The idea is that education is meant to form well rounded citizens who are capable of managing their own affairs and interacting with the rest of society in a way which brings about positive outcomes. In this view of education pupils are meant to learn not just the bare basics of life, but also critical reasoning skills meant to provide them the means to elevate their condition and participate in society. While educating the masses might have some positive benefits for everyone who interacts with them, first and foremost it is meant to benefit the educated and the society they form between them.

In the second theory for the purpose of education, education is primarily a means of preparing the educated to become useful tools for employers. I chose the word “tools” not to be disparaging, but to make an important distinction between the two theories. While in the first theory the educated are expected to become more productive, it is primarily for their own benefit. By being more productive they can contribute more to society and also demand more for themselves. What distinguishes the education for the advancement of employers theory is that while the educated are taught useful occupational skills in both, in the latter they are also taught to be low power and obedient. While they are taught to increase their productivity they are also taught to ask for nothing in return. In this arrangement the educational system primarily benefits the employer, making higher education a requirement to maintain the same wages rather than a means to increase one’s salary proportional to the increase in their output.

What Is The Purpose Of Education?

At first glance, both of these theories seem plausible, so let’s take a look at the evidence. Keep in mind that this is my perspective as a white, male, middle class, college educated millennial who performed fairly well while attending public institutions. *Your mileage may vary*

For The Educated

First, some arguments for education as a means of self enrichment. If the purpose of education were to create productive, obedient employees, it would seem strange to offer so much in the way of extracurricular classes and activities. For some extracurricular activities there is an apparent connection, such as shop class, but what benefit is it to an employer to have an employee who learned pottery or who was on the varsity swim team? These seem superfluous to the goals of employers.

Another argument in favor of education being to form good citizens is that at least to some degree it does. In surveys conducted year after year those with more education consistently scored higher on knowledge of what is happening around the world than do their less educated peers. While they still often vote from the heart, they tend to have a better understanding of what policies each politician is actually in favor of and what the effect of those policies might be. While knowledge is not itself sufficient to produce good citizens, an informed citizenry is important to making better decisions as a society.

For The Employer

Now, let’s take a look at why some might believe that the primary focus of education is to support the desires of employers. The first reason, and the one that was most obvious to me while I was in the system, was the lack of obvious life skills classes. It was incredibly off-putting to me that in a nation where so much is purchased with debt there was no personal finance class, nor an everyday, every-man contract law class. Or, that while there was a class to teach you the proper way to dress and act in an interview, no attention was given to game theory or the art of negotiating. So much attention was placed on pushing you into that first corporate job, with no effort put into making sure you were compensated fairly for it, or that you would know how to manage the money you did earn. In addition, while sex is likely to play an important role in almost every person’s life, there is no nationwide standard for sex education like there is for subjects that are more relevant to the workplace. These, among many other obvious classes that have been omitted, make it seem apparent that the curriculum is not meant to cover things that would actually benefit the student in their everyday life. To further drive home the point, many institutions actually conduct surveys of employers to ask them what skills they feel recent graduates are lacking; to my knowledge no similar survey is conducted of recent graduates to ask the same.

A less obvious but possibly even stronger argument that education is for the benefit of the employer is the way that courses are taught. Students are taught that for every problem they are presented there is one right answer, that the answer is known, and that there is only one acceptable way to achieve the correct answer. I distinctly recall being frustrated in math class because I would fail to memorize the gobbledygook formula to solve a problem and instead logically work my way to the correct answer another way. For this I was rewarded half credit; I had the correct answer, but I didn’t arrive at it the correct way. I also recall an incident in a physics classroom where the instructor asked the class “assuming the price per ounce was the same, in which city would you get paid more for a gold ring?”. I answered the question correctly, but was told I was wrong because that was not what was in the book (yes, I did look it up and double check afterwards). The professor was not interested in finding the right answer; he had the answer, and it was the students’ job to memorize that answer. This idea of “follow our process” and “don’t question our answers” is not a terribly useful skill when navigating the realities of life, but it does serve a useful purpose if you are an employer looking for a complacent, docile workforce.

The final argument I will present for education primarily supporting the goals of employers is what we have done with educational institutions during the pandemic of 2020. Despite the danger to the student body, the school staff, and ultimately the families of the students, there has been a huge push to open schools for in-person classes, even though many classes can be done virtually. While there is some credence to the notion that in person classes are more effective than remote classes it would be hard to argue that it is worth the risk of killing grandpa. Instead, it seems that the main draw to force schools to reopen for in-person classes is to use them as daycare centers so that the parents can go back to work. As far as I can tell the arguments presented about the quality of the education or students “falling behind” have been presented in bad faith, and are merely acceptable justifications to give for what are otherwise unjustifiable actions.

My Opinion

All of this said, I may have revealed my hand a bit when it comes to what I believe the purpose of education is. Honestly, it seems to me that it serves both as a means of enriching the individual and advancing the objectives of employers, but it also seems to skew heavily towards the latter. I think we want to believe that education is to benefit the individual and society, or that it is to benefit the employer and that is somehow not at odds with benefiting society. But, there are too many glaring omissions for it to primarily be for the benefit of the educated, and it is too easy to see where the interests of the educated and the employer are not aligned. Learning to follow a process but not think critically is not in my interest or the interest of society. Learning how to convince an employer to allow me work for them but not to convince them to pay me decently is not in my interest.

There are a lot of individuals working in education that I believe have the best intentions in mind, probably even the vast majority of the front line workers; teachers, counselors, etc. However, as an institution it would appear to have other motives.

What Ought To Be The Purpose Of Education?

So now we get to what the purpose of education ought to be. While learning the skills necessary to be productive in your occupation is no doubt important, to focus primarily on that alone is a mistake for two reasons. The first is that other than developing well-rounded individuals there is little you can do to prepare students today for a job market that probably doesn’t even exist yet. What skills are needed is changing so quickly, and many of them are best learned through first-hand real life experience. I don’t think we can know what the jobs of the next decade will look like, and even if we did we could not prepare students in a way that is even the equivalent of a year of experience actually doing the job. I learned so much more the first year of working in technology than I had the previous 4+ years being educated for it. The second reason that focusing on job skills alone is a mistake is that there is so much more to life than working. Both from a lofty perspective of art, music, romance, travel, and family, and from the perspective of the mundane, like deciding if you should pay off your credit card or invest in your 401k. A graduate who is capable of putting his nose to the grindstone for his boss, but incapable of finding joy in life and of managing his own affairs is not an adult, but is instead a depressed child in a business suit.

When our systems are laid bare and we do not like what we see we have a duty to change them. While I have no intentions at this time to return to the formal “educational” system, I hope for others and our future as a society that the coming generations will pull away from what is and push towards what ought to be.


A Primer on Primary Keys

Early in my programming career most of the databases I worked on had the luxury of decent hardware and relatively small data (100s of thousands of rows per table). However, more and more I now find myself working with databases where 10s of millions of records per table is the norm. With smaller databases it is easy to just focus on “how can I store my data with the least amount of effort?”, but as things scale it becomes imperative that you carefully consider the performance implications of what you are asking the computer to do. One thing you can do to take a big step in the right direction is to learn how to select appropriate primary keys.

Primary key selection is important for a couple of reasons. First, almost every table will have one, and almost every mildly complex query will need to include it for joins, so knowing how to select one is a win that keeps on winning. Secondly, the primary key dictates access patterns, and how an application can go about creating new records. I think most people who have worked with databases are well aware of the first point, but I’m willing to bet even many DBAs are less familiar with the second one. Luckily, I will be discussing both.

Natural vs Surrogate Keys

The first choice you need to make when selecting a primary key is whether to use a natural or surrogate key.

Natural Keys

A natural key is one in which one or more existing fields within a record are also used to identify the record. For instance, you might identify a person by social security number or by first and last name, a building by its address, an airport by its abbreviated name (IATA code), or an order line by order number and order line number. Natural keys were much more common in older databases because they do not require persisting additional data, but they come with a major drawback: almost everything you think is a natural key is not. What do I mean by this? Well, if you were paying attention to my examples above, you might have realized that none of them is necessarily a natural key. Social security numbers and addresses can be reassigned. IATA codes can be reassigned, or in theory an airport could have its code changed. First and last name combinations are even worse, since they are often not even unique at a single point in time. The only example I gave that might be a natural key is order and order line number, but it depends on your business logic. Can the business reuse an order number when it rolls over? Will it ever roll over? Is it possible to have two order lines with the same line number (for example, when soft deleting lines)? If you can’t say with certainty that two records will never exist with the same identifier, or that an identifier will never change, then that identifier can’t be used as a natural key. Things get a bit muddier if you do not keep a history of the data and thus you *technically* can reuse a primary key at two different points in time. However, except where you must legally purge the history, you can generally assume this will not be the case. Plus, even when you delete a record from the database, have you deleted it everywhere else it may still exist (a cached web page, for instance, or an Excel export, or even the minds of the system operators)?

Surrogate Keys

OK, so if not a natural key, then what? This is where a surrogate key comes in. Instead of trying to identify a set of fields that will represent a unique identifier now and until the end of time, we make one up. It is much easier to guarantee that a key will always be unique and will never change if it exists only as an identifier and nothing else. This is why you see a lot of primary keys that are integers, GUIDs, or a string value that is never displayed outside the database. For the cost of some extra storage, and potentially needing to pull more fields into your queries, you gain the advantage of having strong primary keys. In fact, some of the best keys I have seen are composite keys that include the surrogate key of a parent record. For instance, depending on your business rules, you may be able to say with certainty that an order line number will always be unique within an order. Given that, if the order has a surrogate key, an order line can use a composite key of the order key and line number. Surrogate keys are a nearly fail-safe way to identify records, and are quite common in business applications.
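As a rough sketch of that last idea (in Python, with hypothetical field names), an order can carry a GUID surrogate key while its lines are keyed by the parent’s key plus a line number:

```python
import uuid

# Parent record: identified by an arbitrary surrogate key.
order = {"id": uuid.uuid4(), "customer": "ACME"}

# Child records: composite key of (parent surrogate key, line number).
# This only works if business rules guarantee that line numbers are
# unique within an order and never change.
lines = {
    (order["id"], 1): {"sku": "WIDGET", "qty": 3},
    (order["id"], 2): {"sku": "GADGET", "qty": 1},
}

# Lookups (and, in a database, joins) use the composite key directly.
first_line = lines[(order["id"], 1)]
```

The same shape translates directly to a two-column primary key in SQL.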

Sequential vs Non-Sequential Keys

The next important decision you will make is whether or not the keys will be sequential; does each value follow the previous one when sorted? This is an important decision because it has huge performance implications, the details of which I unfortunately won’t be able to get into here without turning this post into a novella. Just know that inserts are generally much more efficient when identifiers are sequential.

Sequential Keys

The typical example of a sequential identifier is an integer that starts at 0 or 1 and increments by 1. This is great because not only are inserts efficient, but it is a nice small identifier, which saves you space and even makes it possible to remember as you look through records. One trick you can use to get twice as many possible values is, instead of starting near 0, to start with the lowest or highest value possible and increment by one in the opposite direction (for example, in SQL Server INTs can go from -2^31 to 2^31-1). Also, you can save some additional space, or be able to store even more records, by selecting the correct integer type (in SQL Server your options are TINYINT, SMALLINT, INT, BIGINT).
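As a quick sanity check, here is a sketch in Python of the value ranges for SQL Server’s integer types, along with the doubling trick described above:

```python
# Value ranges for SQL Server's integer types (TINYINT is unsigned).
ranges = {
    "TINYINT":  (0, 2**8 - 1),            # 1 byte: 0 to 255
    "SMALLINT": (-2**15, 2**15 - 1),      # 2 bytes: -32,768 to 32,767
    "INT":      (-2**31, 2**31 - 1),      # 4 bytes: about +/- 2.1 billion
    "BIGINT":   (-2**63, 2**63 - 1),      # 8 bytes
}

low, high = ranges["INT"]

# Seeding the identity at 1 leaves roughly half the range unused...
ids_starting_at_one = high        # ~2.1 billion usable values

# ...while seeding at the lowest possible value doubles it.
ids_starting_at_min = high - low  # ~4.3 billion usable values
```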


Non-Sequential Keys

Your typical non-sequential identifier is a GUID/UUID (globally unique identifier/universally unique identifier). GUIDs are (semi-)randomly generated 16-byte values (four times as large as INTs) that are assumed to be unique (I’m sure I will write more about this later). The great thing about GUIDs is that they are much more arbitrary; the value has no correlation to other fields on the record and has nothing to do with the order of inserts. This has security benefits, since even given your entire database and all of your code, a hacker still couldn’t predict what the next identifier will be. The downside to using GUIDs, though, is that inserts are slower and your tables will take up more space due to fragmentation (though these downsides can be mitigated with “sequential” GUIDs; so many topics for another day…). GUIDs have one other important feature that I will discuss next.
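In Python, for example, the standard library’s `uuid` module generates these; a version-4 UUID is 16 (semi-)random bytes:

```python
import uuid

# A version-4 UUID is generated from (pseudo-)random bits.
key = uuid.uuid4()

size_in_bytes = len(key.bytes)   # 16 bytes, vs. 4 bytes for a 32-bit INT

# Two freshly generated values are, for all practical purposes, never equal.
another = uuid.uuid4()
```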

Decentralized Key Generation

An arbitrary surrogate key sounds good: it nicely separates the concern of identifying a record from what that record actually stores and how it is actually persisted. But what does this actually buy us? I’m glad I asked 🙂

Since the primary key is arbitrary, the database does not need to act as a central authority when assigning them. This means that our application(s) can take on that responsibility, which gives them the ability to do some interesting things. For instance, if you need to create and persist a complex graph of data (parent, child, and grand-child records), you can do so without needing to insert each record and wait to get an identifier back. Instead, you can create the whole graph, identifiers and all, and bulk insert all the records in dependency order. Also, if you have records that are related to each other but live in separate databases (think micro-services), your application can insert the records into both databases without needing to insert into one and wait for a response. Another interesting pattern that application-generated identifiers allow is persisting data to an offline database to later be synchronized with the central authoritative database. While this could technically be achieved with a complex process of reassigning identifiers on synchronization, or by federating blocks of IDs (yet another topic), GUIDs are a much simpler and more elegant solution. A final pattern enabled by decentralized ID generation is pulling together data from multiple databases with similar schemas into a single report. This is useful in environments where you have a database per tenant, but you want to be able to combine the data into reports for the business. While it is not always a requirement, decentralizing identifier assignment opens up some interesting design possibilities that are simply not possible otherwise.
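A minimal sketch of the first pattern (in Python, with hypothetical record shapes): because the application assigns every identifier itself, the whole graph can be built in memory and then persisted in dependency order, with no round-trips to the database just to obtain identifiers:

```python
import uuid

def new_id() -> str:
    # The application, not the database, is the key authority.
    return str(uuid.uuid4())

# Build a parent -> child -> grand-child graph entirely in memory,
# wiring up foreign keys before anything is persisted.
order = {"id": new_id()}
line = {"id": new_id(), "order_id": order["id"]}
shipment = {"id": new_id(), "line_id": line["id"]}

# Bulk insert in dependency order: parents first, then children.
insert_batches = [[order], [line], [shipment]]
```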


So there it is, a “brief” primer on primary keys. I have attempted to keep this as short as possible, but I was not entirely successful. Nonetheless, I hope you found it interesting and useful, and I hope it will help you make more informed decisions when selecting primary keys in the future.


Why Should your Development Team Embrace Blazor?

The following is an article I wrote on behalf of Headspring – the original article can be seen posted here.

Blazor in context: The user imperative

As digital technology advances at accelerating rates, both external users and business software users expect websites to deliver more productivity and smoother user experiences. Excessive page loads, and the design limitations they impose, make a website feel significantly slower. Organizations investing in their user-facing experiences are seeing returns in terms of efficiency and user adoption/loyalty.

A single page application (SPA) delivers better UX and speed, but in the past, that has always meant more development time and a JavaScript-heavy frontend. But dotnet-heavy shops now have an intriguing option in Blazor, the new Microsoft front-end framework that leverages C# in the browser. If you’re thinking about enhancing your front-end experience, Blazor is probably on your radar—but is it the right framework for you? Let’s look at the concerns you might have when considering Blazor for your web apps, as well as the reasons you might choose it over other frameworks.

So how is Blazor different from Silverlight?

The first concern on everyone’s mind when they hear about Blazor is “Didn’t Microsoft try to do this with Silverlight?” This is a fair concern to have, but luckily they have learned from the past. With Silverlight marching towards the end of support, we can look back and ask what went wrong. The fatal flaw seems to have been built into Silverlight from the start: the reliance on a plugin. The assumption that users would opt to download it just to visit a website did not pan out for Silverlight. Plus, some browsers dropped support for plugins entirely. Blazor takes a completely different approach—it does not require a plugin or anything more than the browser you already have. Plus, not only does it work in all modern browsers, but it can also be configured to work in older browsers like Internet Explorer.

Some early-adopter anxieties

The other major concern is the immaturity of the framework. If you’ve ever been an early adopter of a certain technology, you know that it can be a double-edged sword. On the one hand, you get access to the latest and greatest features well ahead of the market, but on the other, you can sometimes find yourself downstream without a paddle. While Blazor is no different in this regard, it does have several mitigating factors going for it.

  • First, it is a Microsoft framework that has been in development and beta testing for years leading up to its first supported release.
  • Secondly, because of that beta period, there is already a community behind Blazor, providing support and building libraries for all your typical web client needs.
  • Lastly, it draws inspiration from other popular front-end frameworks that came before it and reuses existing technologies, such as Razor, rather than completely reinventing the wheel. If your team already has experience with other frontend frameworks they will find that Blazor has a familiar, component-centric approach.

Now that we’ve explored some of the concerns with Blazor, what are some of the indicators that it’s a good fit for your project?

You can leverage your Razor page knowledge

The most obvious use case is if you are already a dotnet shop looking to greenfield a modern single-page web application or to migrate an existing Silverlight or MVC application. While MVC will continue to be a popular option, if you are already using Razor pages, you can get a smooth, zero-page-refresh experience with little additional effort. This creates a better experience for your users while at the same time reducing the burden on your servers.

Blazor can enhance productivity

But even if C# isn’t your bread and butter, there is significant productivity to be gained by leveraging Blazor as your web framework. By using the same language for the back-end and front-end code, you can avoid duplicating effort and reuse the same libraries, the same API models, and the same business logic. Plus, how many times have you had to work on a project where the front-end models no longer matched the back-end, or validation on the front-end was missed on the back-end, leaving security vulnerabilities in your API? By using one language, you can reuse your code and know you’re covered.

Another advantage of using a single language is the ability to build bigger apps with smaller teams, faster. Standardizing the language reduces the necessity for specialized developers and makes it easy for everyone on your team to be a full stack developer. Dropping context-switching and waiting on someone else to do their part before you can do yours can save a ton of time when developing a feature. On top of all of that, you can use the same tools throughout the stack, onboarding is much faster, and your team can spend more time getting good at what they know rather than constantly needing to learn new versions of the same thing.

There are many factors to consider when choosing a framework for a modern web app. While each specific application will have its own unique factors to consider, you should now have a good starting point when evaluating Blazor as a potential solution. Whether creating something new, replacing an existing page-load-per-click website, or even replacing another SPA that has become hard to maintain—Blazor may help you achieve the productivity gains and enhanced user experience you need in order to stay ahead of the curve.


Securing All The Things

So I recently became a lot more security conscious and went on an encryption rampage to try to lessen my exposure to unwanted intrusions online. As such, I implemented a few solutions for protecting my data; A password manager, encrypting my personal data, and encrypting my network traffic.

A note before I describe my solutions – Please, please, please backup your passwords and data before attempting to add encryption (or just if you have not created a backup in a while). Not backing up your data is like not having insurance, or a smoke detector, or a fire extinguisher because, meh, it probably won’t happen to me. Don’t be that guy.

Another note; I am not affiliated with any of these companies and I am not compensated in any way for praising their products. I am usually actually very cynical about commercial products, but I believe in giving credit where credit is due.


LastPass

After doing (a ton of) research, I finally decided that I would trust my passwords to the password manager LastPass. What is a password manager and why do I need one? Well, I will answer those questions in reverse. First, why do I need a password manager? The key to strong passwords that are difficult to crack is that they need to be long, contain a lot of characters, contain a variety of characters, and not be reused between sites. That last requirement is key. One of the biggest vulnerabilities that you face as a consumer right now is that hackers will break into the database of one company that you have an account with, and then they will take that password and try it all over the web to see if they can access your other accounts. The worst part is that the company that got hacked often won’t even tell you that it happened. This means that a hacker can get your credentials from some random, innocuous social media site, and then turn around and use them to take all of the money out of your bank account. To fix this, you have a different password for each of your accounts – but this quickly becomes difficult to manage. This is where a password manager comes in. Using one REALLY strong password, you log into your password manager, and then it can auto-fill your passwords for you for all of your accounts. This way, you can have super complicated passwords for all of them, and you don’t have to remember any of them. Plus, good password managers (like LastPass) will also help you generate passwords that are super tough to crack. The best part is, your LastPass account is encrypted client-side, so even the CEO of LastPass couldn’t steal your passwords if they wanted to. There is one security vulnerability with a password manager: all your passwords are now in one place. But with a strong master password and client-side encryption, it will be (almost) impossible for a hacker to crack.

Sync

I’ve used Google Drive forever because I like the idea of my files being available anywhere, anytime I need them. But what I didn’t like is that my files were being sent over the web unencrypted, and stored unencrypted, so that any employee at Google with high enough access rights could snoop on them. Plus, it is standard operating procedure at Google to snoop through all of your data to market to you, and to do who knows what else with it. That’s where Sync comes in. Sync is a cloud drive like Google Drive, OneDrive, or any of the other major brands out there, but like LastPass it too uses client-side encryption. This means that before the bits even leave your computer they are encrypted, and they are not decrypted again until they are back on your computer. I love this solution because you can take your files anywhere, even to your phone, and not risk them being compromised en route.


Tutanota

Tutanota is to Gmail what Sync is to Google Drive – an encrypted email client. While it is not perfect, since you still need to send and receive emails from less secure users, it is at least a step in the right direction and will help you protect your communications without needing to send Enigma-encoded notes wrapped around cigars via pony express. Similar to Sync, Tutanota flips the default from the company having access to your data to only you having access to your data.


VPN

The last solution I added to secure my data is using a VPN by default for all of my internet traffic. Even though I don’t do anything nefarious, I do not like the idea that a government, corporation, or some random guy on the internet can see everything I do. It is like allowing a stranger to install a webcam in your living room; sure, there’s probably no harm in it, but it’s more than a little creepy at best. The secret to a VPN is that it encrypts your traffic (starting to see a theme here?) and then sends it to a server. There, your traffic is mixed with thousands of other users’ traffic, making it difficult to trace a request back to the requester. A bonus is that you can choose which server your requests go to. Do you want to see what it looks like to browse the internet from France? No problem, just choose a VPN server in France.

In Closing

Client-side encryption is your friend, and it should be much more sought after than it currently is. There are many other things you can do to improve your online security hygiene, but just installing a couple of programs will give you a good head start. Even better, some of these programs are free, or there are similar free programs available.

So now you have no excuse not to be safe online!


Surfing The Web Like A Pro

If you are like me you spend a lot of time surfing the web. Also, if you are like me you love to optimize the things you do most frequently. My thesis is that the less time you have to spend grabbing your mouse and clicking around, the faster and more productive you will be. So with that I present to you how to surf the web like a pro.

Google Chrome

Most people still use Chrome as it is one of the fastest web browsers, plus it doesn’t have the stigma of being IE/Edge. Here are some of the most powerful shortcuts for Chrome (they may also work in other browsers, but I have not tested them there).
– ctrl+t = New Tab
– ctrl+w = Close Tab
– F6 = Highlight URL Bar
– pgdn = Move One Screen Down
– pgup = Move One Screen Up
– arrow down = Move Down
– arrow up = Move Up
– ctrl+f = Opens Up The Search Bar – you can start typing and repeatedly press enter to jump to the next instance of your text on the page


DuckDuckGo

Like Google, DuckDuckGo is a search engine. Now you might be asking yourself, “why do I need a new search engine?”. Well, I’ll tell you why. Because unlike Google, DuckDuckGo does not track your activity for marketers, and therefore also does not bias your search results into a bubble. Some people like how “helpful” Google is – personally, I like to stay open and informed. On top of that, DuckDuckGo also lets you personalize your search interface. The only thing I don’t like about them is I still can’t get over how much I dislike their logo, but hey, you win some and you lose some. Here are some of my favorite shortcuts for DuckDuckGo.
– arrow down = Move One Result Down
– arrow up = Move One Result Up
– arrow left = Move One Search Result Type Left
– arrow right = Move One Search Result Type Right
– enter = Go To Page
– ctrl+enter = Open Result In New Tab
– ctrl+tab = Move One Tab Right
– ctrl+shift+tab = Move One Tab Left

Happy surfing!


Don’t Let Your Password “Crack” Under Pressure

Ever wonder how hard it is to crack your password? Well, you need not wonder any longer – Here is the formula:

Possible password combinations = Character cases ^ Characters

Characters is the number of characters in your password. Character cases are all the possible characters you could enter in a password field. How many character cases are there?

  • a-z = 26 lowercase letters
  • A-Z = 26 uppercase letters
  • 0-9 = 10 numerals
  • Special characters on a standard keyboard (ex: ~!$%) = Appx. 32 special characters

Simple, right?

Adding characters or character cases makes a password more complex, and thus more difficult to guess. Here are some password examples:

A weak password – 6 lower case characters = 26^6 = 308,915,776 combinations

This seems like a lot, until you consider that a brute force attack (one in which the hacker just tries password after password until he guesses the right one) can try 8 million times per second. At this speed, it would only take 38.6 seconds to guess your password.

Now, let’s see what happens if we add one character.

A slightly less weak password – 7 lower case characters = 26^7 = 8,031,810,176 combinations

A huge improvement, but this password can still be cracked in 16.7 minutes.

Now, let’s see what happens if we use all possible characters.

A moderately weak password – 6 alphanumeric and special characters = (26+26+10+32)^6 = 689,869,781,056 combinations

Better still, but this password can still be cracked in just under a day.

Now, let’s look at an example of a good password.

A strong password – 16 alphanumeric and special characters = (26+26+10+32)^16 ≈ 3.72e+31 combinations

That’s roughly a 4 followed by 31 zeros!

At the same rate, it would take the hacker about 1.47e+17 years to guess your password (roughly ten million times the current age of the universe). If they are that committed, I say they can have it.
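The arithmetic above is easy to reproduce. Here is a small sketch in Python, using the same assumed rate of 8 million guesses per second (real attack rates vary enormously with hardware and hashing algorithm):

```python
# combinations = character_set_size ^ password_length
def combinations(charset_size: int, length: int) -> int:
    return charset_size ** length

def seconds_to_crack(charset_size: int, length: int,
                     guesses_per_second: int = 8_000_000) -> float:
    # Worst case: the attacker must try every possible combination.
    return combinations(charset_size, length) / guesses_per_second

weak = seconds_to_crack(26, 6)                  # ~38.6 seconds
longer = seconds_to_crack(26, 7)                # ~16.7 minutes
mixed = seconds_to_crack(26 + 26 + 10 + 32, 6)  # just under a day
strong = seconds_to_crack(94, 16)               # astronomically long
```

Note how adding a single character multiplies the search space by the size of the character set, which is why length is such a cheap win.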

You can now see how using a few more characters, and a few more types of characters, can significantly enhance the strength of your passwords and prevent a would-be intruder from gaining access to your accounts.

Of course, this is all very simplified. Most hackers aren’t going to attempt every combination of every valid character. More likely, they are going to try the 100 most common passwords, use a rainbow table attack, or another similar strategy to home in on the most promising potential passwords. Still, using a longer password with more possible characters is a simple and effective way to make it more difficult to crack.

If you want to learn more about what makes a password “strong” and how password attacks work, please check out the Wikipedia page on Password Strength.

Stay safe!

PS: For context, 1.47e+17 years looks like this:

147,000,000,000,000,000 years


Insuring Against Everything

If you insure against everything you will soon find yourself somewhat safe and totally broke. Running a business is risky business; Life is risky business; But the greatest risk of all is being too afraid to take any risk at all.

Take steps to insure only when…

  1. …the insurance costs less than the cost of repairing the damage multiplied by the likelihood of it occurring. This is rarely the case unless you have an exceptional situation that the insurer did not adequately predict.
  2. …you cannot recover from the catastrophic event if it were to occur. For instance, most of us do not have the bankroll to pay out of pocket for heart surgery.

It is generally wise then to accept responsibility for everything else.


Married To Material

As much as we own our material things, we are owned by our material things. Every time you purchase a new product you have, in some ways, become married to that purchase. Now, you are obligated to interact with it; You are obligated to maintain it; You are obligated to care about its whereabouts and whether or not it has been stolen. Having things is a sign of success, but can often lead to a life of burden. A house must be painted; A car must be washed; Your dog must be fed. Like any marriage, make sure you love what you are asking to possess, because possession is always a two-way street.


What Do YOU Want?

One of the keys to happiness is to not assume that you want what everyone else wants; To not assume you want what commercials say you should want; And especially not to assume that you want what others tell you that you want. You are your own person, you have your own desires, and living someone else’s dream will never make you truly fulfilled.


Consistent vs Adaptive

Being a good leader means constantly walking the line between being consistent and being adaptive. If you fail to be consistent, no one will ever believe in you or believe that your vision is honest. At the same time, you must acknowledge when things are not working and when the world has changed around you, and be willing to change with it. It is a fine balancing act that must be performed if you want to succeed in the long run.

Consistency builds authenticity, but staying ahead of and leading change is where true leadership happens.


Hello World!

This is the obligatory “hello world” post 😀