
Friday, 7 August 2015

What Is "Good" Object Oriented Programming?

Introduction

The term Object-Oriented Programming (OOP) is so ubiquitous in modern software development that it has become a buzzword, appearing on every software engineer's résumé by default with little consideration for what it means to be good at it.

So what does it mean to be good at it?

Defining "Good" OOP

Providing a general definition of OOP is relatively easy:

"Instead of a procedural list of actions, object-oriented programming is modeled around objects that interact with each other. Classes generate objects and define their structure, like a blueprint. The objects interact with each other to carry out the intent of the computer program" (Wikipedia)

By contrast, a concrete definition of what makes for good OOP is tough to capture so concisely. Good OOP is guided by a small collection of detailed design principles, including SOLID, cohesion and coupling, that together lead to the most flexible, understandable and reusable code possible.

In this post I'll be running through a real-world-like scenario and explaining how these principles make for good OOP along the way.

A Naïve Approach to OOP

I'll use the example of an ExchangeRate object that we'll want to validate, store in and retrieve from a database. It's common for a developer who's new to object-oriented programming to define such a class somewhat like the following:

public class ExchangeRate
{
    private const string ConnectionString = "...";

    public int ID { get; set; }
    public string FromCurrency { get; set; }
    public string ToCurrency { get; set; }
    public double Ratio { get; set; }

    public void Save()        
    {
        using (var conn = new SqlConnection(ConnectionString))
        {
            // TODO: write properties to table
        }
    }

    public static ExchangeRate Load(int id)
    {
        using (var conn = new SqlConnection(ConnectionString))
        {
            // TODO: Read values from table
            return new ExchangeRate(value1, value2, ...);
        }
    }

    public static bool Validate(ExchangeRate ex)
    {
        // Validate currency codes
        // Validate that ratio > 0
    }
}

The beginner designs the class this way because all of the methods and properties on the object feel like they belong together. Grouping code based on its logical relation to the same thing is known as logical cohesion, and while it works perfectly well at this scale, logical cohesion quickly reveals its downfalls.

Here's a rundown of the key problems associated with the class as it is currently designed:

  1. The class will become unmaintainably bloated as we add more and more functionality that relates to ExchangeRate.
  2. This bloat is compounded by the fact that if we later decide to allow loading/saving from locations other than a SQL database, or to validate exchange rates differently depending on varying factors, we'll have to add more and more code to the class to do so.
  3. Even if a consumer of ExchangeRate doesn't use the SQL-related methods, the SQL-related references are still carted around.
  4. We'll have to duplicate generic load/save code for every object we create beyond ExchangeRate. If there's a bug in that code, we'll have to fix it in every location too (which is why code duplication is bad news).
  5. We're forced into opening a separate connection for every save or load operation, when it might be beneficial to keep a single connection open for the duration of a collection of operations.
  6. ExchangeRate's methods can't be tested without a database because they're tightly coupled to the database implementation.
  7. Anything that wants to load an ExchangeRate is tightly coupled to the static ExchangeRate.Load(...) method, meaning we'll have to change every one of those call sites by hand if we later want to load from a different location. It also means that those callers can't be tested without a database either!

Improving OOP Using SOLID

The principles of SOLID give a framework for building robust, future-proof code. The 'S' (Single Responsibility Principle) and the 'D' (Dependency Inversion Principle) are great places to start and yield the biggest benefits at this stage of development.

The dependency inversion principle can be a hard one to grasp at first, but it boils down to this: wherever our class tightly couples itself to another class through a direct reference to its type (i.e. the new keyword or calls to static methods), we should instead hand our class an instance of that type, referenced by the most abstract interface we need, thereby decoupling our class from specific implementations. This will become clearer as the example progresses.
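
As a quick illustration before we get there (a hypothetical ReportGenerator, not part of the exchange rate example), compare a direct dependency with an inverted one:

// Tightly coupled: the class constructs a concrete dependency itself.
// (SqlConnection lives in System.Data.SqlClient; IDbConnection in System.Data.)
public class CoupledReportGenerator
{
    private readonly SqlConnection _conn = new SqlConnection("...");
}

// Inverted: the caller supplies the dependency via the most abstract
// interface we need, so any IDbConnection implementation will do.
public class DecoupledReportGenerator
{
    private readonly IDbConnection _conn;

    public DecoupledReportGenerator(IDbConnection conn)
    {
        _conn = conn;
    }
}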

The single responsibility principle is exactly what you'd expect it to be, that each object should have just one responsibility.

Here's all the responsibilities that the ExchangeRate class currently has:

  1. Hold information representing an exchange rate
  2. Save exchange rate information to a database
  3. Load exchange rate information from a database
  4. Create ExchangeRate instances from loaded information
  5. Validate exchange rate information

Since these are separate responsibilities, there should be a separate class for each.

Here's a quick pass of refactoring ExchangeRate according to these two principles:

public class ExchangeRate
{
    public int ID { get; set; }
    public string FromCurrency { get; set; }
    public string ToCurrency { get; set; }
    public double Ratio { get; set; }
}

public class ExchangeRateSaver
{
    public void Save(IDbConnection conn, ExchangeRate ex)
    {
        // TODO: write properties to table
    }
}

public interface IExchangeRateFactory
{
    ExchangeRate Create(string from, ...);
}

public class ExchangeRateFactory : IExchangeRateFactory
{
    public ExchangeRate Create(string from, ...)
    {
        // ExchangeRate exposes no parameterised constructor,
        // so populate it through its property setters instead.
        return new ExchangeRate { FromCurrency = from, ... };
    }
}

public class ExchangeRateLoader
{
    private readonly IExchangeRateFactory _factory;

    public ExchangeRateLoader(IExchangeRateFactory factory)
    {
        _factory = factory;
    }

    public ExchangeRate Load(IDbConnection connection, int id)
    {
        // TODO: Read values from table
        return _factory.Create(value1, value2, value3);
    }
}

public class ExchangeRateValidator
{
    public bool Validate(ExchangeRate ex)
    {
        // Validate currency codes
        // Validate that ratio > 0
    }
}

Code grouped in this manner is described as being functionally cohesive. Functional cohesion is considered by many to lead to the most reusable, flexible and maintainable code.

By breaking the code down into separate classes, each with a single responsibility, we have grouped the code by its functional relationships instead of its logical ones. Consuming code and tests can now swap in and out individual chunks of isolated functionality as needed, instead of carting around one monolithic, catch-all class.

Additionally, by inverting the dependencies of ExchangeRateLoader and ExchangeRateSaver, we have improved the testability of the code as well as allowing for any type of connection to be used, not just a SQL one. The benefits of dependency inversion are compounded as more and more classes become involved in a project.
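
To see the payoff in one place, here's a rough sketch (my own composition, not part of the refactoring above) of how a consumer might wire these pieces together, sharing one open connection across several operations:

// Composition root: the only place that knows the concrete types.
IExchangeRateFactory factory = new ExchangeRateFactory();
var loader = new ExchangeRateLoader(factory);
var saver = new ExchangeRateSaver();
var validator = new ExchangeRateValidator();

using (IDbConnection conn = new SqlConnection("..."))
{
    conn.Open();

    // One connection now serves many operations (fixing problem 5),
    // and a test could supply a fake IDbConnection instead (fixing 6).
    var rate = loader.Load(conn, 42);

    if (validator.Validate(rate))
    {
        saver.Save(conn, rate);
    }
}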

What about the "OLI" in "SOLID"?

The 'O' (Open/Closed Principle) and the 'L' (Liskov Substitution Principle) aren't exercised by this example, as they concern extending existing code without modifying it and the safe substitutability of derived types, respectively.

The 'I' (Interface Segregation Principle) states that no client should be forced to depend on methods it does not use and, for the most part in this example, has been covered by adhering to the Single Responsibility Principle.

If you'd like to see an example of a situation where applying the ISP and the SRP produce different outcomes, or an example of applying the Open/Closed and Liskov Substitution Principles, let me know in the comments.

In Closing

Hopefully this article has begun to shed some light on how "good" object-oriented code is achieved and how it leads to more flexible, testable and future-proof code.

If you'd like for me to expand on any specific points, or cover how this becomes ever more important as the scale of a project grows, let me know in the comments and I'll do a follow up post!

Friday, 13 February 2015

Liquid for C#: Defining Success

Introduction

In the High-Level Overview for this series I mentioned that I'll need a way to measure the success of this project as it progresses.

As this project's aim is to create a one-for-one replica of the behaviour of the Ruby implementation of Liquid, I will be porting Liquid's integration tests to C# and following a test-driven approach.

What's in a Name?

Though they have been called integration tests in Liquid's code, the majority of these tests are in fact functional acceptance tests, which is what makes them useful for confirming that the behaviour of the system is correct.

Unit Test

Tests the behaviour of a single system component in a controlled environment.

Integration Test

Tests the behaviour of major components of a system working together.

Functional Acceptance Test

Tests that the system, per the technical specification, produces the expected output for each given input.

Unit and integration tests verify that the code you've written is doing what it was written to do, while functional acceptance tests verify that the system as a whole, without consideration for the structure of its internal components, does what it is designed to do.

Any Port in a Storm

There are hundreds of tests to port to C# and, as it turns out, not all of the tests in the Ruby implementation's integration namespace are integration or functional acceptance tests... some are unit tests!

The porting process is therefore a matter of replicating the original tests as faithfully as possible, translating them into functional acceptance tests where needed.

A test that ported smoothly

# Ruby
def test_for_with_range
    assert_template_result(
        ' 1  2  3 ',
        '{%for item in (1..3) %} {{item}} {%endfor%}')
end
// C#
public void TestForWithRange()
{
    AssertTemplateResult(
        " 1  2  3 ", 
        "{%for item in (1..3) %} {{item}} {%endfor%}");
}

A test that needed translation

# Ruby - The below are unit tests
#        for methods escape and h
def test_escape
    assert_equal '&lt;strong&gt;', @filters.escape('<strong>')
    assert_equal '&lt;strong&gt;', @filters.h('<strong>')
end
// C# - Rewritten as a test of the 
//      output expected from a template
public void TestEscape()
{
    AssertTemplateResult(
        "&lt;strong&gt;", 
        "{{ '<strong>' | escape }}");
}

When translating from a unit or integration test to a functional acceptance test, I'm using the documentation and wiki as the design specification. This ensures that the tested behaviour is the templating language's expected behaviour, not just the behaviour I expect!

What's Next?

Once all of the tests are ported, the next step will be to start writing the code to pass those tests. Remember, in Test Driven Development we start with failing tests and then write the code to make those tests pass.

The AssertTemplateResult method mentioned earlier currently looks like this:

protected void AssertTemplateResult(
                   string expected, 
                   string source)
{
    // TODO: implement me!
    throw new NotImplementedException();
}
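
Once implementation begins, a minimal version might look something like the sketch below. This is an assumption only: it presumes an eventual Template.Parse(...) entry point and NUnit-style assertions, neither of which is settled yet.

protected void AssertTemplateResult(
                   string expected,
                   string source)
{
    // Assumed API: parse the source template, render it with
    // no data bound, and compare against the expected output.
    var template = Template.Parse(source);
    Assert.AreEqual(expected, template.Render());
}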

There are still a few hundred more tests to port, though, so wish me luck!

Monday, 9 February 2015

Liquid for C#: High-Level Overview

Introduction

In the Liquid for C# series, I will be writing a C# interpreter for the Liquid templating language from scratch.

In this first post I define the project's scope and overall intention. Code does not factor into this stage at all; it's purely about the API's purpose, not its implementation.

Broad Strokes

The first step in any project is to define what it will be doing at the highest level. Ideally, this should be expressible as a single sentence or a simple diagram.

This project's definition is deceptively simple: Template + Data = Output.

Armed with this very general definition, the next step is to break the overall process into broad, functionally cohesive chunks. I find that this is best achieved by running through potential use cases. The below is the outcome of that process.

It immediately jumps out at me that the Abstract Syntax Tree and the steps that follow it are implementation agnostic. This means they are not specific to Liquid and can therefore be reused in any templating-language interpreter.
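
To make that boundary concrete, here's a purely illustrative sketch (all names are mine; nothing is settled yet) of where the Liquid-specific front end would hand over to a reusable back end:

// Hypothetical AST node; the real design comes later in the series.
public abstract class SyntaxNode
{
    public List<SyntaxNode> Children { get; } = new List<SyntaxNode>();
}

// Liquid-specific: turns Liquid source text into an AST.
public interface ITemplateParser
{
    SyntaxNode Parse(string source);
}

// Implementation agnostic: Template + Data = Output, for any
// language whose parser can produce a SyntaxNode tree.
public interface ITemplateRenderer
{
    string Render(SyntaxNode root, IDictionary<string, object> data);
}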

Defining Success

The question then becomes one of how to know when the project fulfils its purpose.

As the aim of this project is to provide a full C# implementation of Liquid's behaviour as it is currently implemented in Ruby, I will port all of the integration tests for Liquid to C# and follow a Test Driven Development approach. I will only consider the project to be a success when it passes all of the original tests.

What Next?

In bigger teams or projects it's necessary to delve much deeper in the design phase, going as far as defining the interfaces for the API and how they plug together, so that all involved parties can work independently without heading in completely different directions.

Since this is just me working on a hobby project, though, I'll instead be taking a very iterative approach and in the next post I'll be writing code!

Wednesday, 4 February 2015

Restructuring DotLiquid: Part 3

The Issue at Hand

For those who didn't know, DotLiquid is a straight C# port of Liquid, a library written in Ruby.

The Ruby programming language is significantly different to C#, so even best-effort attempts at like-for-like reconstruction of the library inevitably lead to structural issues in the API's design.

Lost in Translation

The structural issues that come from direct porting include:

  • Excessive use of static classes.
  • Excessive use of Reflection.
  • Lack of Object Oriented design, leading to inflexibility.
  • Duplicate code: tightly knit classes force code to be repeated.
  • Excessive boxing and unboxing, leading to degraded performance.

That's not to do down DotLiquid, though; it is an exceptional direct port of the original library. For the majority of cases it is more than fast enough, and anyone who has written code against the Ruby implementation of Liquid will be able to pick up DotLiquid and use it in exactly the same way without hesitation.

In my quest to produce the perfect API, however, my implementation has become so far removed from DotLiquid's interface, implementation and intent that I have decided to start afresh.

Be sure to come back for my next post, where I'll begin the high level design process for the API including how and why I'll be drawing distinct boundaries between its elements.

Friday, 23 January 2015

Restructuring DotLiquid: Part 2

Bringing down the Hammer

I mentioned in Part 1 that DotLiquid's Condition hierarchy could do with being a bit more object oriented.

As conditions are a relatively small and isolated part of the API, it's a great place to start this series in earnest, so that's where I'll begin.

The Restructure

Here's a before and after of the Condition class hierarchy.

BEFORE

AFTER

First, I introduced a new interface, ICondition, and I did this for two reasons:

  1. Not all future developers will want to use the class ConditionBase as a base - they might have new code requirements or their own base class.
  2. No class that has a dependency on conditions should be forced to depend upon a specific implementation - by using the interface I make those classes compatible with any implementation.
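
As a minimal sketch of that interface (an assumption on my part, based on the Evaluate calls shown further down):

// Assumed shape: a single evaluation method, matching the
// Evaluate(ConditionalStatementState) signature used below.
public interface ICondition
{
    bool Evaluate(ConditionalStatementState state);
}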

Next, I refactored And and Or logic out of Expression and into their own classes. I did this because the code for And, Or and Expression may be logically cohesive, but it is not functionally cohesive. Incidentally, their code's lack of functional cohesion is what made them so easy to separate.
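
For illustration, AndCondition plausibly ends up along these lines (a sketch of mine, not the final code; it assumes the shared Left and Right properties mentioned later):

// Two child conditions combined with short-circuit logic.
public class AndCondition : ConditionBase
{
    public ICondition Left { get; set; }
    public ICondition Right { get; set; }

    public override bool Evaluate(ConditionalStatementState state)
    {
        return Left.Evaluate(state) && Right.Evaluate(state);
    }
}

// OrCondition is identical apart from combining with || instead.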

I made ConditionBase an abstract class to better indicate its purpose as a foundation, as opposed to a class that can be used effectively on its own.

I moved the static collection Operators out of ExpressionCondition and into its own class. This needs further work, as it shouldn't be static at all, but it's a start. More on this in a later post.

The IsElse property is a classic code smell because it will only ever be true in one case: when the type is ElseCondition. Any logic that uses the property would be better off inside ElseCondition itself, encapsulating the functionality. I therefore changed the signature of the Evaluate method to take a ConditionalStatementState object and moved the check for whether an ElseCondition should render into ElseCondition.

// BEFORE
// =====================
// The owning block's render method:
var executeElseBlock = true;
foreach (var block in Blocks)
{
    if (block.IsElse)
    {
        if (executeElseBlock)
        {
           return RenderAll(block.Attachment, context, result);
        }
    }
    else if (block.Evaluate(context))
    {
        RenderAll(block.Attachment, context, result);
        executeElseBlock = false;
    }
}

// The ElseCondition's evaluate method:
public override bool Evaluate(Context context)
{
    return true;
}

// AFTER
// =====================
// The owning block's render method:
var state = new ConditionalStatementState(context);
foreach (var block in Blocks)
{
    if (block.Evaluate(state))
    {
        ++state.BlockRenderCount;
        var retCode = block.Render(context, result);
        if (retCode != ReturnCode.Return)
            return retCode;
    }
}

// The ElseCondition's evaluate method:
public override bool Evaluate(ConditionalStatementState state)
{
    return state.BlockRenderCount <= 0;
}
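
For reference, the shape that the code above implies for ConditionalStatementState is roughly the following (a sketch; the real class may carry more state):

// Wraps the render context and counts how many blocks have rendered
// so far, which is all ElseCondition needs to decide whether to run.
public class ConditionalStatementState
{
    public Context Context { get; }
    public int BlockRenderCount { get; set; }

    public ConditionalStatementState(Context context)
    {
        Context = context;
    }
}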

It's worth noting that I could have introduced an additional base class for AndCondition and OrCondition, in which they'd override the Evaluate method and share the Left and Right properties, but they do so little internally that it felt like overkill. Should they ever grow in size, an abstract base class can be retrofitted painlessly enough.

Summary

Overall, this is a great first step on the path to a clean and pure API, but there's still a lot more work to be done. I suspect that by the end of this series DotLiquid's API will be a significantly different beast, exposing the same functionality in a much more flexible API.

I'm really enjoying the challenge and, if you'd like me to clarify anything, feel free to let me know in the comments!

Wednesday, 21 January 2015

Restructuring DotLiquid: Part 1

Introduction

In the previous series, Optimising DotLiquid, the focus was to improve rendering performance. In this series, the focus is to improve DotLiquid's API.

With DotLiquid v2.0 on the far horizon, now is the perfect time to smooth any rough edges that have appeared over the course of development and distil the API into its purest, cleanest form.

What Makes a Great API?

Accessible

An accessible API requires consistency in its naming conventions, method signatures and chosen design patterns. The API should also be minimalist, exposing no public methods, objects or functionality beyond those that drive the end user's interaction with it.

Flexible

A great API makes as few assumptions as possible about how it will be used. Keeping class coupling to a minimum, allowing the end user to pick and choose functionality through good object-oriented design, and keeping class dependencies to a minimum are all part of making an API flexible.

Extensible

A great API has to be easy to extend. This means making key methods virtual, classes concise and substitutable and avoiding any behind the scenes hack-magic. The principles of SOLID really come into their own when it comes to extensibility, because you never know which direction the next developer will want to go.
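
As a small, contrived illustration (not DotLiquid's actual code), a virtual hook is what lets a third party redirect behaviour without copying the class:

// The base class marks Render as virtual, inviting extension.
public class GreetingTag
{
    public virtual void Render(TextWriter result)
    {
        result.Write("Hello!");
    }
}

// A third party can substitute its own behaviour without
// touching, or duplicating, the original class.
public class LoudGreetingTag : GreetingTag
{
    public override void Render(TextWriter result)
    {
        result.Write("HELLO!");
    }
}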

A Bird's Eye View

When fine tuning an API, implementation takes a back seat to architecture. After all, we're designing the interface by which developers interact with the library to achieve a goal, not how that goal is achieved.

The quickest way to get an architectural overview is to add a class diagram to the project. Here's the class diagram for DotLiquid as it stands at the end of Optimizing DotLiquid.

This diagram tells me a lot about the state of the DotLiquid API as it currently stands.

The classes with a dotted outline are helper classes, extension classes and containers for commonly used resources. This is fine in a small project, but in an API this could be preventing a third party from tweaking core functionality. I'll be looking to see what can be refactored into instance classes that are provided as configuration to templates, improving flexibility and customisability.

The Condition class isn't respecting the Single Responsibility Principle: it currently has the responsibilities of evaluating an expression, evaluating an AND condition and evaluating an OR condition. ElseCondition and the IsElse property aren't the OOP ideal either, so refactoring the condition hierarchy will yield benefits for extensibility.

The down arrow marked against quite a few of the methods in this diagram indicates use of the internal access modifier. Where it appears, these methods seem to serve as back-door access to functionality that isn't exposed publicly. This is a code smell that harms extensibility and may indicate deeper structural issues, so I'll be looking to do away with them completely.
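
To illustrate the smell (a contrived example, not DotLiquid's real code):

public class ExampleTemplate
{
    public string Render()
    {
        return RenderInternal();
    }

    // internal: in-assembly callers get a back door that third-party
    // code can neither call nor override, which harms extensibility.
    internal string RenderInternal()
    {
        return string.Empty; // actual rendering elided
    }
}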

The Tag class and associated hierarchy has a wide, shallow inheritance structure that is self-explanatory. This is an example of great Object Oriented Design. Other than a few public and internal methods I'd like to clean up, I doubt there's much work to be done to the already clean, accessible signature seen here.

What's Next?

In the next post of this series I'll single out an area of DotLiquid's architecture that could use improvement, explain why such improvements are needed and then implement the changes with before and after class diagrams...

It's going to be awesome!