Simple Console Application in .NET Core with DI and Configuration

While the .NET Core documentation and libraries do a good job of providing an easy way to get started with hosted apps (web or otherwise), they are somewhat lacking in the same guidance for simple run-to-completion console apps. You can write a simple Main() method and do your stuff, but how do you take advantage of the amazing configuration and dependency injection that you get out of the box with hosted apps? Sure, you could set up all that machinery and maybe create an IHostedService implementation just to get going. Even then, you are left with a hosted app that you have to shut down once your logic is done.
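The good news is that the configuration and DI pieces are just NuGet packages you can wire up yourself. Here is a minimal sketch, assuming the Microsoft.Extensions.Configuration and Microsoft.Extensions.DependencyInjection packages are referenced; the App class is a hypothetical stand-in for your actual logic:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// A hypothetical class holding the actual run-to-completion logic.
public class App
{
    private readonly IConfiguration configuration;
    public App(IConfiguration configuration) => this.configuration = configuration;
    public Task RunAsync()
    {
        Console.WriteLine(configuration["Greeting"]);
        return Task.CompletedTask;
    }
}

public class Program
{
    public static async Task Main(string[] args)
    {
        // Build configuration just like a hosted app would.
        var configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();

        // Wire up DI by hand; no host, no IHostedService, no shutdown dance.
        var services = new ServiceCollection();
        services.AddSingleton<IConfiguration>(configuration);
        services.AddTransient<App>();

        using (var provider = services.BuildServiceProvider())
        {
            await provider.GetRequiredService<App>().RunAsync();
        }
    }
}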

An AWS Primer for Azure Developers

Even though AWS has been around for much longer, as is the norm for a lot of people coming from the .NET/Microsoft side of things, my cloud experience started with Azure. I got into AWS when I was a few years into Azure. I remember thinking at that point it would be nice to have something like this primer that would give me a very high-level introduction to AWS based on what I knew of Azure. So here it is.

The Rise of Go

Recently, Go has seen a real uptick in popularity and adoption across a variety of uses. It has been around for a while and has been continually improving. Its purpose-built simplicity and extra focus on making concurrency easy and safe are part of it. The other part I like is the ease with which what you write becomes portable. These aspects make it an especially good fit for writing infrastructure and tooling.

Revisiting Kubernetes vs. Service Fabric

Since I wrote my initial post regarding Kubernetes and Service Fabric, a few things have happened:

  • Kubernetes had a chance to mature a lot more and also, needless to say, has sky-rocketed in adoption.
  • Managed Kubernetes on the major cloud providers (AKS/EKS/GKE) has had a chance to mature a lot more.
  • Adoption of Service Fabric is minuscule in comparison.
  • Microsoft itself seems to be putting (wisely so) much of its firepower behind Kubernetes while Service Fabric sort of just sits there on the side.
  • The successor to Service Fabric (i.e. Service Fabric Mesh) is going to be container-driven.

Specifically, in terms of where Microsoft is putting its money, I think that got brought home at Ignite 2019. You only need to sit through the major keynotes and peruse the sessions to figure out that as far as these kinds of platforms are concerned, Kubernetes has “won the day”. All things being equal, my suggestion would be to adopt Kubernetes and avoid Service Fabric. If you are starting out, this means making sure you pick a technology that is not bound to any specific OS platform (sure, Kubernetes can run Windows workloads, but it will be a while before it gets parity with Linux if it ever does). If you’re already invested in Service Fabric, put a migration plan in place to move away.

Working with Windows Containers in Kubernetes

Even though Docker was built atop Linux containers, and Linux remains the majority of Docker usage out there, Windows Containers have been a thing for a while now. They went mainstream in 2016 and, one hopes, became "ready for primetime" with Windows Server 2019. Even though integration with Docker is getting tighter, if you are in the unfortunate position of having to use Windows Containers with Kubernetes, you are going to have issues.

Installing PFX Certificates in Docker Containers

Recently, I had to install PKCS12 certificate bundles (i.e. a PFX file with the certificate and private key included, protected with a password) on a Docker container. This is standard fare on normal Windows machines or on PaaS systems such as Azure App Service. Doing this on a container, though, proved to be tricky (perhaps with good reason as I mention later) - so tricky that I ended up writing my own tool to do it. I have written this up in case you have similar needs and are working with .NET Core.
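For a flavor of what the tool has to do, here is a minimal sketch of importing a PFX into a certificate store with .NET Core; the path and password are placeholders, and in practice they would come from a mounted volume and a secret:

using System.Security.Cryptography.X509Certificates;

class CertInstaller
{
    static void Main()
    {
        // Hypothetical path and password; never hard-code the password in real code.
        var certificate = new X509Certificate2(
            "/certs/app.pfx",
            "pfx-password",
            X509KeyStorageFlags.PersistKeySet);

        // Add the certificate (with its private key) to the current user's store.
        using (var store = new X509Store(StoreName.My, StoreLocation.CurrentUser))
        {
            store.Open(OpenFlags.ReadWrite);
            store.Add(certificate);
        }
    }
}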

Running .NET Core Global Tools Without the SDK

.NET Core Global Tools are pretty neat. If you are targeting developers with the .NET Core SDK installed on their machines and need to ship CLI tools, your job is made immensely easier. It is just as easy as shipping a NuGet package. However, once you get used to building these things, it is easy to fall into the trap of treating this shipping mechanism as if it were Chocolatey (or apt-get, or yum, or what-have-you). It is certainly not that. The process of installing and upgrading your tools is handled by the .NET Core SDK - which saves you from having to create a self-contained package as you would for a ready-to-go tool - and this makes sense: global tools are a developer-targeted thing. You're not supposed to use them to distribute end-user applications.

Azure DevOps for CI and CD

I set up CI and CD for two of my applications using Azure DevOps. It was quite easy. Setting up the build pipeline is as simple as including a YAML file in your source repository. It then just comes down to knowing how the build YAML schema works. As far as the release (or deployment) pipeline is concerned, though, I could not find a similar method. I had to set it up through the Azure DevOps UI. I don’t know if there is some information I am missing, but that would seem to somewhat go against the DevOps principle - you know - infrastructure as code and all.

On Service Fabric, Kubernetes and Docker

UPDATE (Nov 13, 2019) My views on this have changed since I wrote this post. See this post for where I stand now.

Let us get Docker out of the way first. Microservices and containers are quite the hype these days. With hype comes misinformation and hysteria. A lot of people conflate the two (fortunately there are wise people out there to set us all straight). If you have done your due diligence and decided to go with microservices, you don't have to go with containers. In fact, one could argue that using containers in production might be a good crutch for applications that have too many tentacles and that nobody has the appetite to port or rewrite to be "portable". Containers do have other good use cases too. Docker is the leading container format (although it is starting to face some competition from rkt these days), and all in all, I am glad containers exist and I am glad that Docker exists. Just be aware of the fact that what you think you must use may not be what you need at all.

Reader-Writer Locking with Async-Await

Consider this another pitfall warning. If you are a frequent user of reader/writer locking (via the ReaderWriterLockSlim class) like I am, you will undoubtedly run into this situation. As more and more of the code we write these days is asynchronous with the use of async/await, it is easy to end up in the following situation (an oversimplification, but just imagine write locks in there as well):

async Task MyMethod()
{
	...
	myReaderWriterLockSlim.EnterReadLock();
	var thing = await ReadThingAsync();
	... 
	myReaderWriterLockSlim.ExitReadLock(); // This guy will choke.
}

This, of course, will not work. This is because reader/writer locks, at least the implementation in .NET, are thread-affine. This means the very same thread that acquired a lock must be the one to release it. As soon as you hit an await, the rest of the method may resume on a different thread - so the thread trying to exit the lock may not be the one that entered it.
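If you need a lock you can hold across an await, one common workaround (a sketch, not necessarily the approach you should always reach for) is SemaphoreSlim, which is not thread-affine:

using System.Threading;
using System.Threading.Tasks;

class Example
{
    private readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1);

    async Task MyMethod()
    {
        await gate.WaitAsync();
        try
        {
            var thing = await ReadThingAsync();
            // ... use thing ...
        }
        finally
        {
            // Safe: SemaphoreSlim does not care which thread releases it.
            gate.Release();
        }
    }

    Task<int> ReadThingAsync() => Task.FromResult(42); // placeholder
}

You lose the reader/writer distinction (everything is serialized), but the lock survives the thread hop.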

Listor- Showcasing React and .NET Core

For both React and .NET Core (specifically ASP.NET Core and Entity Framework Core), I got sick of playing around with little prototypes and decided to build an application. Listor is the first proper application I have built using both of these technologies. It is a simple list-maker application- nothing fancy. But I have been using it since I put it up and it has come in handy quite a bit.

I am quite impressed with .NET Core (or I should say “the new .NET” - to mean not just the .NET Core runtime, but .NET Standard, the new project system, ASP.NET Core, EF Core, and let’s say even the latest language improvements to C#). So much so, that it is going to suck a bit going back to writing traditional .NET Framework stuff for my day job.

An Azure Service Fabric Restarter in F#

Trying to get beyond just writing quick scripts here and there in F#, I went through functional design patterns targeted at building mainstream applications. Railway-oriented programming specifically stuck with me. I decided to try it along with some of the other core functional concepts such as projecting to other domains with map and bind operations. My first foray into this was applying it to, surprise, surprise, yet another quick script I had in place. This one was something I had already put together in F# to recycle all code packages for a given application running on Azure Service Fabric.

OAuth2 and OpenID Connect versus WS and SAML

I have mentioned how part of our replatforming project that saw us move to Azure was moving the security protocol from WS-Federation/WS-Trust to OAuth2 and OpenID Connect. I kept running into rumblings on the internet about how, despite their wide adoption, OAuth2 and OpenID Connect were somehow less secure. Comparing secure implementations of both side by side, I did not really see how this could be. Since our industry is not short on oversimplification and grand proclamations, I decided to pose this question to experts in the field.

Moving to Azure PaaS and Service Fabric- Part 2

This is Part 2 of a two-part blog series:

  • Part 1 (Application- Services, Security and UI)
  • Part 2 (this one; Database, Configuration, Logging, Caching, Service Bus, Emails, Tooling, Rollout)

Database

We moved from our on-premises installation of SQL Server to the PaaS offering that is SQL on Azure. Other than the actual physical moving of the data, the additional challenge we had was that our system had a number of separate databases that were interconnected via synonyms. Since each SQL Database is an independent resource on Azure, this would not be possible without introducing external data sources, which would still be prohibitive in terms of performance. We therefore had to remove the synonyms and rework some of our code to account for this. We opted to go with an Elastic Pool associated with all our databases. We also configured geo-replication for redundancy.

Moving to Azure PaaS and Service Fabric- Part 1

This is Part 1 of a two-part blog series:

  • Part 1 (this one; Application- Services, Security and UI)
  • Part 2 (Database, Configuration, Logging, Caching, Service Bus, Emails, Tooling, Rollout)

It has been an action-packed year at work. We moved our entire platform in one fell swoop from on-premises to Azure PaaS (Platform as a Service). Since this was a big re-platforming effort that would require regression testing across the entire set of applications, we took the opportunity to include a few technology upgrades in the process. All in all, it was a daunting task and took quite a bit of research and preparation before the actual implementation could be done. I think it is worth highlighting some of the key achievements. The move entailed the following key aspects:

The New Way Forward for SPAs with Angular and React

Having worked with Angular 1.x for some time and having liked it quite a lot (I guess that one we’re supposed to call AngularJS, and the new one is just Angular - yes, that is not confusing at all, is it?), I must say I was quite spooked when I first saw the documentation for the new Angular. It indeed is a completely different framework. There is no easy migration path from AngularJS short of a rewrite, at which point you might as well evaluate all your options including React.

Fiddling with F#

I have always had a keen interest in functional programming. While I still shy away from going completely functional for full-blown applications, I try to use the tenets of functional programming as much as I can even when writing C#. This is made much easier by the fact that C# has borrowed a lot of functional programming features as it has evolved. With each new version of the language, I find my code getting more concise and more expressive mostly owing to these features. That said, if you are looking for a functional-first experience, nothing beats a functional language. I like F# as it belongs to the .NET ecosystem but is derived from OCaml which itself is quite elegant.

Writing a WS-Federation Based STS using WIF

Even though SAML and WS-* have started to be looked upon as the old guard of security protocols with the popularity of OAuth 2, they are not without their merits. For one, they are inherently more secure than OAuth (in fact, you need to rely on a separate underlying secure transport for OAuth to be considered secure- and if you are someone who believes SSL is broken, then OAuth is practically insecure). It just so happens that their demerits are very visible to someone trying to implement or integrate them. OAuth, by contrast, is much simpler and easier to implement- and for most purposes, secure enough. If you have settled on WS-Federation as your protocol of choice, Windows Identity Foundation (WIF) is most likely going to be your de-facto choice. While powerful, WIF as a library is not what one would call “easy to use”. If it’s cumbersome when you use it as a relying party, the complexity is ten-fold if you try to build a security token service (STS) based on it.
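For a sense of the shape of it, a WIF-based STS boils down to deriving from SecurityTokenService and overriding two methods. This sketch leaves out all the certificate, endpoint and validation plumbing a real STS needs:

using System.IdentityModel;
using System.IdentityModel.Configuration;
using System.IdentityModel.Protocols.WSTrust;
using System.Security.Claims;

public class MySecurityTokenService : SecurityTokenService
{
    public MySecurityTokenService(SecurityTokenServiceConfiguration configuration)
        : base(configuration) { }

    protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
    {
        // Decide who the token is for and how it is signed/encrypted.
        var scope = new Scope(request.AppliesTo.Uri.OriginalString,
            SecurityTokenServiceConfiguration.SigningCredentials);
        scope.TokenEncryptionRequired = false; // for illustration only; encrypt in production
        scope.ReplyToAddress = scope.AppliesToAddress;
        return scope;
    }

    protected override ClaimsIdentity GetOutputClaimsIdentity(
        ClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        // Decide what claims go into the issued token.
        var identity = new ClaimsIdentity("CustomSTS");
        identity.AddClaim(new Claim(ClaimTypes.Name, principal.Identity.Name));
        return identity;
    }
}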

Git- Rewriter of History

Undoubtedly one of the biggest advantages that Git provides is using rebasing to maintain a clean commit history. I find that I am using it a lot these days- primarily in three modes:

  • As part of pull (i.e. git pull --rebase)
  • Interactive rebase to: 1) keep my own history clean when I am off working on a branch by myself, and 2) clean up a feature branch’s commit history before merging it into the mainstream
  • Rebase my branch against a more mainstream branch before I merge onto it (i.e. git rebase mainstream-branch)

With interactive rebase, usually what I do is this: I will have one initial commit that describes in general the feature I am working on. It will then be followed by a whole bunch of commits that are advancements of or adjustments to that - quick and dirty ones with "WIP" (i.e. work in progress) as the message. If, in the middle of this, I switch to some other significant area, then I will add another commit with a more verbose message, and then again it's "WIP", "WIP", and so on. I will add anything I need to qualify the "WIP" with if necessary (e.g. if the "WIP" is for a different context than the last few WIPs, or if the WIP does indeed add some more information to the initial commit). In any case, after some time, I will end up with a history that looks a bit like this (in chronological order):

Beware of this WCF Serialization Pitfall

Ideally, one should avoid data contracts with complex graphs- especially ones with repeated references and definitely ones with circular references. Those can make your payload explode on serialization. With repeated references, you may run into an integrity issue on deserialization. With circular references, the serialization will enter a recursive loop and you will probably run into a stack overflow.

Seeing that this becomes unavoidable in certain situations, WCF has a way for you to tell it to preserve object references during serialization. You do this by setting IsReference to true on the DataContract attribute that you use to decorate the composite type that is your data contract.
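For example (a minimal sketch; the entity is illustrative):

using System.Runtime.Serialization;

// IsReference = true tells the serializer to preserve object references instead of
// serializing each occurrence as a copy, so circular graphs no longer blow the stack.
[DataContract(IsReference = true)]
public class Employee
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public Employee Manager { get; set; } // repeated/circular references are now safe
}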

Using CSS Media Queries for Responsive UI Design

Using something like Bootstrap for a responsive UI covers most of the bases. But if you need more control, it’s a good idea to get familiar with Media Queries in CSS. It might come in handy some time, plus that is what Bootstrap uses under the hood as well, and it never hurts to learn how the tools you use work. The Mozilla page on media queries goes into just the right amount of detail and gives you a good outline of everything you can do with it.

Diagnosing MEF Composition Errors

For all its goodness, when something goes wrong with MEF, the problem is terribly hard to diagnose. Thankfully, there's an article out there by Daniel Plaisted at Microsoft that goes into great detail about all the things that can go wrong with MEF and how to get to the bottom of each one. I have it bookmarked, and if you work a lot with MEF, you should too. The one area that I find most useful, though, is figuring out composition-time errors using tracing.

Two Types of Domain Events

You can find a good primer on domain events in this post by Udi Dahan. There are some issues with his approach, though, which Jimmy Bogard raises and addresses in his post. However, I was left with two questions:

  1. Shouldn’t the domain event be dispatched/handled only when the transaction or the unit-of-work commits? Because whatever changes have been made to the state of the domain isn’t really permanent until that happens.
  2. There may be cases when domain events need to trigger changes to other domain objects in the same bounded context - and all of that needs to be persisted transactionally. In other words, in this scenario - it makes sense to have the event be dispatched just before the transaction commits. However, in this case, whatever ends up handling that event also needs access to the current transaction or unit-of-work that is in play - so that all the changes make it to persistence in one fell swoop of a commit.

That leads me to conclude that there are really two types of domain events that need to be handled differently. The first type as listed above would either be infrastructure-y things like sending out e-mails and such, or sending messages to other bounded contexts or external systems. The second type would be within the same bounded context but maintain certain kinds of relations within the domain that could not be modeled within the same aggregate (simply put, they take the place of database triggers in the DDD metaphor).
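Here is a hypothetical sketch of the distinction; the names and dispatch mechanics are mine, not from the cited posts:

using System;
using System.Collections.Generic;
using System.Linq;

// Type 2 events are dispatched just before commit, inside the transaction; type 1
// events are dispatched only after the commit succeeds.
public interface IPreCommitEvent { }   // type 2: updates other aggregates transactionally
public interface IPostCommitEvent { }  // type 1: e-mails, messages to other bounded contexts

public class UnitOfWork
{
    private readonly List<object> pendingEvents = new List<object>();
    private readonly Action<object, UnitOfWork> dispatch; // handler lookup left abstract

    public UnitOfWork(Action<object, UnitOfWork> dispatch) => this.dispatch = dispatch;

    public void Raise(object domainEvent) => pendingEvents.Add(domainEvent);

    public void Commit()
    {
        // Type 2: handlers get this unit-of-work so their changes commit atomically.
        foreach (var e in pendingEvents.OfType<IPreCommitEvent>())
            dispatch(e, this);

        PersistChanges();

        // Type 1: only runs once the domain state change is actually permanent.
        foreach (var e in pendingEvents.OfType<IPostCommitEvent>())
            dispatch(e, null);
    }

    private void PersistChanges() { /* flush changes to the database here */ }
}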

DDD, meet SOA

There is a lot of discussion online around whether DDD and SOA can co-exist, and if so, what that looks like. I am of the opinion that they can co-exist and have arrived at a model that seems to work for me. Consider a complex DDD system with several bounded contexts and contrast it to an SOA system - and I am including the flavor of SOA that I describe in this post.

A Method for Service-Oriented Architecture (SOA)

When you adopt service-oriented architecture (SOA), the most important architecture and high-level design step when building a new system is obviously the decomposition of the system into the right services. A prudent way to decompose a system into services is to first identify which parts of the system are likely to change most frequently. Thus, you decompose by volatility and set up dependencies such that more volatile services always call less volatile services. Within the same level of volatility, of course, you would further decompose services by function if needed.

Getting on the Domain-Driven Design Bandwagon

Domain-driven design has been around for quite a while. I believe the definitive book on it by Eric Evans first came out in 2004. For whatever reason, I had not been exposed to it in the places I worked. I had been hearing about it for long enough and from enough smart people to give it a try. I researched it online a bit and went through quite a few articles. In particular, the set of articles on DDD by Jimmy Bogard (Los Techies) was quite helpful. Finally, I ended up buying Evans' book and reading it cover to cover.

Oatmeal, it's what's for breakfast!

When I am in my healthy and fit zone, I get a lot done. It is quite clear there is no big secret to keeping fit - you exercise and you eat well. To that end, few would disagree that oatmeal is as healthy as they come when it comes to breakfast foods. Few would also disagree that making rolled oats is not a fun activity and eating rolled oats even less so. After about a year and a half of trial and error, I have settled on an oatmeal recipe that is easy and fast to make, decent enough to eat, and a good balance of nutrients. Most importantly, it fills you up and sets your body on the right track for the day - which is what a good breakfast should do.

An Easy Service Proxy Executor for WCF

If you have adopted service-oriented architecture (SOA) and are using WCF as the hosting/communication mechanism for your internal services, chances are you are doing one of two things: either you publish each service like any old WCF service and the services that consume it do so through its WSDL, or you create shared libraries with the contract information that both the service and its consumers reference. Both are somewhat cumbersome but can be managed. If all your services are internal, though, going the WSDL route is somewhat of an unnecessary overhead and is just a bit more unmanageable.
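With shared contract libraries, the proxy plumbing can be wrapped away in a small executor. This is a sketch of the general idea, not the post's exact implementation; the endpoint name is a placeholder and the channel cleanup is the key detail:

using System;
using System.ServiceModel;

public static class ServiceExecutor
{
    // TService is a shared contract interface; endpointConfigName points at config.
    public static TResult Execute<TService, TResult>(
        string endpointConfigName, Func<TService, TResult> call)
    {
        var factory = new ChannelFactory<TService>(endpointConfigName);
        var channel = factory.CreateChannel();
        try
        {
            return call(channel);
        }
        finally
        {
            // Faulted channels must be aborted, not closed.
            var clientChannel = (IClientChannel)channel;
            if (clientChannel.State == CommunicationState.Faulted) clientChannel.Abort();
            else clientChannel.Close();
        }
    }
}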

Bye, Bye, TypeScript, for now

As much as I raved about TypeScript in this post from some time ago, sadly the time has come for me to part with it - at least for now. It is a beautiful piece of work by a beyond-brilliant group of people. As I worked more and more with JavaScript the past year, though, I realized a few things.

The first, and this I already mentioned in my previous post, is that it is still maturing and is not quite there yet. One of my pain points was the lack of object initializers, which, in my opinion, took away some of the expressiveness of JavaScript. As I look at it now, though, the deeper issue is the whole idea of trying to hide the fact that everything in JavaScript is a hash-map. Thus, you can and should be able to create an object or assign an object on the fly using JSON notation. As soon as you introduce TypeScript annotations into the mix, this goes away. The best of both worlds here would be if I could have it annotated and still be able to assign or initialize using JSON (and have the JSON be validated based on the annotation).

Bootstrap Modal With AngularJS

We’ll look at a relatively low-hanging fruit in case you’re working with vanilla AngularJS and Twitter Bootstrap and are not relying on other add-ons such as AngularUI’s Bootstrap extension. One common need I have is to be able to show or hide Bootstrap modals based on a property on my view-model. Here’s a simplified view of the controller:

var app = angular.module('app', ...);
...

app.controller('ctrl', function ($scope, ...) {
	...
	$scope.showModal = false;
	...
});

And here is the HTML:

Writing Your Own LINQ Provider- Part 4

This is the last in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics
  3. A simple, pointless solution
  4. A tiny ORM of our own (this post)

A tiny ORM of our own

In the previous post, we took a look at a simple, albeit pointless example of a LINQ provider. We wrap the series up this time by looking at something a little less pointless - a LINQ-based ORM, albeit a very rudimentary one. As with the previous one, it helps to take a look at the source code first:

Writing Your Own LINQ Provider- Part 3

This is the third in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics
  3. A simple, pointless solution (this post)
  4. A tiny ORM of our own

A simple, pointless solution

In the previous post, we took a look at what happens when you call LINQ methods on IQueryable<T>, and how you can use that to build your own provider. We take that a step further this time by building an actual provider - albeit a somewhat pointless one, in that it adds LINQ support to something that doesn’t really need it. The point, though, is to keep it simple and try to understand how the process works.

Writing Your Own LINQ Provider- Part 2

This is the second in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics (this post)
  3. A simple, pointless solution
  4. A tiny ORM of our own

Provider Basics

In the previous post, we took a look at the two flavors of LINQ methods, i.e. the methods and classes around IEnumerable<T> and the methods and classes around IQueryable<T>. In this post, we expand upon what happens when you call LINQ methods on IQueryable<T>, and how you can use that to build your own provider.
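As a preview of where the series goes, this is the bare skeleton any such provider builds on; the type names here are illustrative, not from the series' source code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// The IQueryable<T> piece just records the expression tree.
public class MyQueryable<T> : IQueryable<T>
{
    public MyQueryable(IQueryProvider provider)
    { Provider = provider; Expression = Expression.Constant(this); }

    public MyQueryable(IQueryProvider provider, Expression expression)
    { Provider = provider; Expression = expression; }

    public Type ElementType => typeof(T);
    public Expression Expression { get; }
    public IQueryProvider Provider { get; }

    public IEnumerator<T> GetEnumerator() =>
        Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();
    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() =>
        GetEnumerator();
}

// The IQueryProvider piece is where the real work happens.
public class MyProvider : IQueryProvider
{
    public IQueryable CreateQuery(Expression expression) =>
        throw new NotImplementedException(); // non-generic path omitted in this sketch

    public IQueryable<TElement> CreateQuery<TElement>(Expression expression) =>
        new MyQueryable<TElement>(this, expression);

    public object Execute(Expression expression) => Execute<object>(expression);

    public TResult Execute<TResult>(Expression expression)
    {
        // Walk the expression tree here and translate it to the target query language.
        throw new NotImplementedException();
    }
}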

Writing Your Own LINQ Provider- Part 1

This is the first in a short series of posts on writing your own LINQ provider. While LINQ is the best thing that ever happened to .NET, and using it is so much fun and makes life so much easier, writing your own LINQ provider is “complicated” to say the least (for context: the LINQ interfaces to NHibernate, RavenDB or Lucene are all providers).

A quick outline of the series:

Conditional JQuery Datepicker With AngularJS

I just find AngularJS directives so much fun. It is so satisfying to see them work. Recently, I came across a requirement where I had a text field bound to a property that could either be a date or text. I had a way of knowing which one it was, but if the property was a date, the text field would need to become a datepicker, and turn back into a normal text field if not. And no, not an HTML5 date control - an old-timey jQuery datepicker.

Getting Functional With Perhaps

Ever since the introduction of LINQ, people have been trying all sorts of clever ways to get more functional constructs into C# to wrap away certain annoying procedural details that are part of the language because of its non-functional beginnings. One of the most annoying classes of operations in this context is the TryX methods (e.g. TryGetValue, TryParse and so on) that use out parameters and force you to break into statements what is otherwise a fluent sequence of calls.
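As a taste of the idea - the Perhaps type here is an illustrative sketch, not the library's actual API:

using System;

// A minimal option-style wrapper around the TryX pattern.
public readonly struct Perhaps<T>
{
    private readonly T value;
    public bool HasValue { get; }
    private Perhaps(T value, bool hasValue) { this.value = value; HasValue = hasValue; }

    public static Perhaps<T> Some(T value) => new Perhaps<T>(value, true);
    public static Perhaps<T> None => default(Perhaps<T>);

    public Perhaps<TResult> Map<TResult>(Func<T, TResult> map) =>
        HasValue ? Perhaps<TResult>.Some(map(value)) : Perhaps<TResult>.None;

    public T OrElse(T fallback) => HasValue ? value : fallback;
}

public static class PerhapsExtensions
{
    // Adapt int.TryParse into something chainable.
    public static Perhaps<int> ParseInt(this string s) =>
        int.TryParse(s, out var result) ? Perhaps<int>.Some(result) : Perhaps<int>.None;
}

// Usage: no out parameters, no broken statement flow.
// var port = input.ParseInt().Map(p => p + 1000).OrElse(8080);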

Code Generation And Aspect Orientation

TL;DR

The CodeDOM is a cool library within .NET that can be used for structured code generation and compilation. When combined with Reflection, one neat application is to be able to inject aspects into your code at run-time. I have created Aspects for .NET, a library that does just that, and also tries to bring AOP to MEF.
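As a quick taste of what working with the CodeDOM looks like, here is a minimal sketch that builds a type in memory and emits C# source for it:

using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

public static class CodeDomDemo
{
    public static void Main()
    {
        // Build a language-agnostic tree: namespace Generated { class HelloWorld { } }
        var unit = new CodeCompileUnit();
        var ns = new CodeNamespace("Generated");
        var type = new CodeTypeDeclaration("HelloWorld") { IsClass = true };
        ns.Types.Add(type);
        unit.Namespaces.Add(ns);

        // Emit C# source from the tree.
        using (var provider = new CSharpCodeProvider())
        using (var writer = new StringWriter())
        {
            provider.GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
            Console.WriteLine(writer.ToString());
        }
    }
}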

Whether it be generating boilerplate, generating proxy classes or processing DSL, code generation has numerous applications. There are a few different options for code generation in .NET:

Reader Writer Locking In .NET

Quite often people turn to the lock statement when protecting access to shared resources from multiple threads. A lot of times, though, this is too big of a hammer. This is because to maintain the integrity of the lock, any access of the protected resource, be it simply accessing its value or modifying it, needs to be done within the lock. This means even concurrent reads get serialized. A lot of times, what you need is for concurrent reads to be possible as long as they read a consistent value, while writes are serialized.
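That is exactly what ReaderWriterLockSlim gives you. A minimal sketch of the pattern:

using System.Threading;

public class Cache
{
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private int value;

    public int Read()
    {
        rwLock.EnterReadLock();          // concurrent readers allowed
        try { return value; }
        finally { rwLock.ExitReadLock(); }
    }

    public void Write(int newValue)
    {
        rwLock.EnterWriteLock();         // exclusive; waits for readers to drain
        try { value = newValue; }
        finally { rwLock.ExitWriteLock(); }
    }
}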

RudyMQ- A Rudimentary Message Queue for Windows

For some odd reason out of the blue, I got this hankering to build a message queue (albeit rudimentary - hence the name) from scratch. I’ve been working with MSMQ for a while now, mostly as a transport for WCF. As cool as it is, it can really get on your nerves at times. It is an enterprise-grade product, after all, which means there are a lot of dials you can turn. If something is not right, you’ll get an error. If your experience has been the same as mine, you will recognize the dreaded “insufficient resources” error that MSMQ gives you for almost any of a thousand things that can go wrong.

Dancing with Responsive Design

I have been hearing about responsive design on and off for some time now, and it has always appealed to me as a pattern to follow for web-based user interfaces. CSS3 is obviously quite powerful and media queries provide a relatively easy way to build one unified UI that looks great on PCs, but then adapts and shape-shifts accordingly when viewed on a smartphone or tablet without having to completely re-implement a “mobile site” as so many do today. Since UI design is not my core area, though, I never could quite gather the energy to do something with it. Then I saw support for responsiveness in the new Bootstrap 3. Like with all other aspects of web UI design, it makes responsiveness that much easier as well. As added motivation, I tried out my To Do application in my smartphone - and it looked awful.

On Rolling Your Own

Within the context of software development, the phrase “rolling your own” usually has a bad smell attached to it. Most of the time, this is with good reason. If you are building a fairly complex system for a business and there is ROI at stake, it surely makes sense to at least assess what is readily available in the industry and is used and thus “certified” by the community for certain components before jumping in and building it oneself (however fun that may be). On the extreme end of this, you certainly don’t want to roll your own operating system or database (unless that is at the core of what you’re doing - in which case, of course, you do).

Yes, One More To-Do Application

UPDATE (2015/1/7)

This application has been re-written from scratch using some new stuff I learned. The new application is called CHORE. Understandably, the links to the application and source code as mentioned in the original post don’t work anymore. I did not update them as I want the original post to stand as is. Here, however, are relevant links to the new application:

The application is hosted here.

The source code can be found here.

Finance Manager

A bunch of things I’ve been working on and have blogged about have culminated in an actual application that uses all of it. Finance Manager is a SPA web application I built for my own use to keep track of my finances. This application allows me to create periodic budgets and record all my transactions. I can then look at my budgeted versus actual amounts based on those transactions. This was already something I was doing with an Excel spreadsheet. I took what I was doing, created a domain model out of it, and built this application around it. It is not quite feature-complete and not deployed anywhere yet, but I have open-sourced the code.

Modeling, DSL and T4- Ramblings

UPDATE (2015/01/10)

There have been changes in my thoughts about how one should go about this. Consequently, I have abandoned the modeling library that I speak of in this blog post. Understandably, the link to which that points no longer works.

I absolutely loathe writing repetitive code. That is what machines are for. My philosophy therefore is to try to generate as much of this kind of code as possible. With .NET, T4 gives you a pretty neat code generation mechanism for generating code that follows a given pattern (the first example that comes to mind is POCOs from a domain model). If you think about it, though, most multi-tier enterprise-type applications have quite a bit of code that can be generated and that derives from the original domain model. How nice would it be to be able to generate a huge chunk of the codebase so that you only have to think about, write and test what you absolutely need to? I guess Entity Framework does some of it for you if you’re into it. If you don’t like the heavy-handedness of it (like me), you could opt for keeping your model as an EDMX file but then write a bunch of T4 around it to generate various layers of code based on it.

TypeScript, AngularJS and Bootstrap- The Killer Combo

After having recently used this combination, I am in love with it. For those unfamiliar with any of these, here are quick one-liners:

TypeScript is a JavaScript superset from Microsoft that compiles down to JavaScript and adds static typing and other neat features like classes and what not. It is worth mentioning that none other than Anders Hejlsberg (the father of C#) was involved in its development.

AngularJS is a JavaScript framework geared towards building rich and testable single-page applications. This one comes from none other than Google.

Providers for the Commons Library

A blog series on my Commons Library would not be complete without mentioning all the providers that go with it. The Commons Library, by itself, gives you a framework, some common functionality and a bunch of contracts. To get actual functionality out of it, providers need to be built that implement those contracts. The Commons Library does contain a bunch of built-in providers as well. These built-in providers are ones that do not have any third-party dependency other than the .NET framework and the most common of its extensions; the idea being I do not want to impose a whole bunch of dependencies on the Commons Library itself. Other than these built-in providers, I have built other providers that do have third party dependencies. These are individual libraries that are available as NuGet packages.

Data Access in the Commons Library

The data access block in the Commons Library is based on the Unit of Work and Repository patterns - or at least my take on them.

Unit of Work

You start with a unit-of-work factory (which is an implementation of IUnitOfWorkFactory) and call Create on it to get an instance of IUnitOfWork, which is also IDisposable. So, you get a unit-of-work, do your business inside a using block, and call Commit before you leave the block. Methods in IUnitOfWork are all based on working with a specific entity type (which is just a POCO with an identifier field) and use LINQ and IQueryable - this makes it easy to use as a consumer, but also easy to implement providers, as most providers worth their salt already have a LINQ interface. As of the time of this writing, I’ve written two providers - one based on Fluent NHibernate and one based on MongoDB.
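A usage sketch based on this description - Create and Commit come from the text, while the Query<T> method and the entity are hypothetical stand-ins to make the shape concrete:

using System;
using System.Linq;

// Hypothetical stand-ins approximating the contracts described above.
public interface IUnitOfWorkFactory { IUnitOfWork Create(); }
public interface IUnitOfWork : IDisposable
{
    IQueryable<T> Query<T>();   // illustrative name for the LINQ entry point
    void Commit();
}
public class TodoItem { public DateTime DueDate; public bool Escalated; }

public class Example
{
    public void EscalateOverdueItems(IUnitOfWorkFactory factory)
    {
        using (var unitOfWork = factory.Create())
        {
            var overdue = unitOfWork.Query<TodoItem>()
                .Where(item => item.DueDate < DateTime.Today)
                .ToList();

            foreach (var item in overdue) item.Escalated = true;

            unitOfWork.Commit(); // commit before leaving the using block
        }
    }
}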

Logging in the Commons Library

My major goals when building the logging block for the Commons Library were to keep the logging interface simple (just tell me what level, I will give you the message to log - don’t make me think too much), be able to log to multiple places (i.e. logging providers), and for the logging operation itself to be asynchronous (i.e. the only latency any logging should add is a memory operation).

With that, I believe what is now in place meets all of these. You get a MEF imported instance of IAppLogger which has simple Info, Error, Warning, Verbose methods that you can use to log messages or exceptions. Everything you log goes into a queue. When you initialize an application, a log queue thread is started which processes the queue, handles all common logging stuff (i.e. figure out what the configured logging level is and whether this entry should be logged at all based on that, construct a LogEntry object with all the information needed for each individual provider to do its thing, etc.), and dispatches the entry to all configured logging providers.
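A simplified sketch of that mechanism - the types here are illustrative, not the library's actual ones:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class AsyncLogQueue
{
    private readonly BlockingCollection<string> queue = new BlockingCollection<string>();

    public AsyncLogQueue(IEnumerable<Action<string>> providers)
    {
        // Background consumer: drains the queue and hands each entry to every provider.
        Task.Run(() =>
        {
            foreach (var entry in queue.GetConsumingEnumerable())
                foreach (var provider in providers)
                    provider(entry); // e.g. write to a file, the event log, etc.
        });
    }

    // The only cost to the caller is a memory operation.
    public void Info(string message) => queue.Add($"INFO: {message}");
}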

Configuration in the Commons Library

When working on the configuration block in the Commons Library, I started out wanting to decouple the storage of configuration data and the format of that data from the actual configuration interface used by consumers to retrieve that data. I wanted consumers to be able to simply look up configuration data through a dictionary-style interface while the job of parsing the original format would be done by a configuration formatting provider and the job of getting that data from wherever would be done by a configuration store provider. Eventually, I settled on just following the .NET System.Configuration style XML format- as it is somewhat of a standard now, with a lot of other library builders also using it for their configuration needs. Besides, you diverge from this format and then you have to start rolling your own for tedious things like WCF configuration or diagnostics and tracing configuration - definitely a rat-hole I did not want to go down.

Handling Duplicate Libraries with MEF

While building the composition/DI piece for the Commons Library, one problem I ran into was that if you told MEF to load assemblies from a number of different places - and they all had copies of the same library (which is possible, especially with common dependencies) - MEF would load the exports in each copy as many times as it found them. What you end up with, then, is a whole bunch of matching exports for a contract that you expect only one of.
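One way to sidestep this (a sketch, not necessarily the fix I landed on) is to de-duplicate assemblies by full name before MEF ever sees them:

using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Primitives;
using System.IO;
using System.Linq;
using System.Reflection;

public static class CatalogBuilder
{
    public static CompositionContainer Build(params string[] directories)
    {
        // Load everything, then keep one copy of each distinct assembly.
        var uniqueAssemblies = directories
            .SelectMany(dir => Directory.EnumerateFiles(dir, "*.dll"))
            .Select(Assembly.LoadFrom)
            .GroupBy(assembly => assembly.FullName)
            .Select(group => group.First());

        var catalog = new AggregateCatalog(
            uniqueAssemblies.Select(assembly =>
                (ComposablePartCatalog)new AssemblyCatalog(assembly)));

        return new CompositionContainer(catalog);
    }
}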

MEF for everything!

In the first of a series of blogs around my Commons Library, I want to shed more light on my choice of MEF as the underlying mechanism for the AK.Commons.Composition namespace - which handles dependency injection as well as extensibility or plugin-type stuff. I like its attribute-based syntax, choice of different types of catalogs and dynamic discovery (and yes, I am not using dynamic discovery just yet but I intend to; the same goes for taking advantage of different types of catalogs). The following three features, however, stood out for me:

The Commons Library

A commons library is something I’ve always tried to maintain. The idea is you have something of your own that handles (or at least provides a way to handle) common cross-cutting concerns in all applications that you build. This includes areas such as configuration, logging, security, error handling, data access, dependency injection and caching to name a few. As long as it is kept up to date, it is also a good way to keep up to date with new technologies in these areas. My last attempt at a commons library was during my .NET 2 days - and it worked pretty well for applications that I built back then. I realized recently that I hadn’t really kept it up to date and as a result was not using it. So, I decided to scrap it and build something from scratch that would take advantage of the latest and the greatest that’s out there (that being .NET 4.5 as of now).

Introduction to NodeJS

I just ran into the material from a Node.js introduction presentation I had done in a session that shall remain unnamed. I thought it would be a good idea to put it out there in case someone starting out with Node stumbles on here. So, here it is.

To quote Wikipedia:

  • A software system designed for writing scalable internet applications, notably web servers.
  • Programs are written in JavaScript, using event-driven, asynchronous I/O to minimize overhead and maximize scalability.
  • Consists of Google’s V8 JavaScript engine plus several built-in libraries.
  • Created by Ryan Dahl starting in 2009; growth sponsored by Joyent, his employer.

The V8 JavaScript Engine is part of why Chrome is so fast. It compiles JavaScript to native code. It has a “stop-the-world” garbage collector that makes it more suitable for non-interactive applications. In Dahl’s words, V8 is a “beast” with all sorts of features (such as debugging). Node.js is a JavaScript library (compare to JRE or .NET framework) on top of the V8 JS Engine (compare to JVM or .NET CLR).

Automatic Resource Management in C#

Both the .NET framework and Java are garbage-collected systems. This means that when you instantiate objects, the framework keeps track of how the object is being referenced, and automatically frees up memory used by the object when it is no longer referenced by anything and is “out of scope”. This works beautifully with objects that are part of the framework. In .NET lingo, these are called “managed resources”. However, a lot of times, a .NET or Java application needs to talk to other systems external to the framework – such as databases, file systems, network sockets, graphics engines, and so on – i.e. “unmanaged resources”. In such cases, it is up to the programmer to handle allocation and de-allocation of resources. Framework classes that provide access to such resources will usually provide routines to close or dispose of resources. However, the programmer still needs to write boilerplate in order to do it and the boilerplate usually becomes cumbersome when you take into account things like exception handling.
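In C#, the standard answer to that boilerplate is IDisposable plus the using statement: Dispose is guaranteed to run on exit from the block, even if an exception is thrown. A minimal sketch:

using System;
using System.IO;

class Demo
{
    static void Main()
    {
        using (var reader = new StreamReader("data.txt"))
        {
            Console.WriteLine(reader.ReadLine());
        } // reader.Dispose() runs here, releasing the underlying file handle
    }
}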

PASS Summit 2011 Notes

PASS (Professional Association for SQL Server) is “an independent, user-led, not-for-profit organization co-founded by Microsoft and CA in 1999. PASS Summit is the world’s largest, most-focused, and most-intensive conference for Microsoft SQL Server and BI professionals.” (Source: the PASS website at sqlpass.org). The summit is held every year in Seattle. This year, the summit was held from October 11 to October 14 and focused on the upcoming RTM launch of SQL Server Denali, now re-branded as SQL Server 2012. The RTM will be released in the first half of 2012. Microsoft claims it is the biggest release of the SQL Server suite they have done so far with hundreds of enhancements. Some are general such as performance enhancements for the RDBMS as well as SSAS and SSIS. Another notable enhancement is that the cloud version of SQL Server (SQL Azure) is now built using the same codebase as the on-premise version. This write-up outlines some of the other enhancements that were highlighted at the summit.

Where the Crow Came to Die

I was running as fast as I could, but the yak kept gaining on me. All of a sudden, I found myself in my lodge room. I jumped on the bed, only to discover to my horror that the yak had made it to the room as well. With all its fury, the yak jumped on me. That is when I woke up from the vivid nightmare with cold sweat all over my body. Perfectly horizontal on the bed, my head felt fine. I couldn’t put a finger on it, but I knew that there was something that was not right with my body. I decided to get up for a quick trip to the latrine. As soon as I left the bed, the entire room started spinning around me. I could hardly stand or walk straight. That is when the troubling night began.