Monday, September 19, 2016

React: Controlling state to accelerate development

I first started looking at the React JavaScript library from Facebook in early 2014. I had previously worked on sites using MVC frameworks like Angular and Backbone, and at first it was hard to see what problem React was trying to solve. The other frameworks had an API for templating and even HTTP calls, but React could only render HTML in the browser. When I dug into the design, the goal of React soon became clear. It controls the amount of state required to render UI elements, and it allows developers to create components with a single responsibility that can be safely composed into a whole system. The following is a high-level overview of how React achieves this.

How React controls state

The core principle of React is to control and encapsulate the state of the user interface. React components do this by exposing two objects, Props and State.

Props

Props are supplied by a parent component to the children it creates. The props object is set at the time the component is created and it never changes, therefore it is immutable. This enforces one-way data binding so that an element, for example an input control on a form, can be populated with a value held in the props object.

However, when the user changes the value in the input field, it will not change the value from the props object it is bound to. The advantage of this is that values in the props object can be safely used anywhere in the UI, and we know that they will not change the state of the application. The only place they can be changed is in the component that owns the original state. Components that only use props are stateless, which means their behaviour is derived entirely from the inputs given to them.
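Setting React's JSX aside, the heart of the idea can be sketched in plain JavaScript (the component and prop names here are my own, for illustration): a stateless component is just a function of its props.

```javascript
// A stateless "component": a pure function of its props.
// No JSX or React here -- just the shape of the idea.
function greeting(props) {
  // props is read-only from the component's point of view;
  // rendering the same props always yields the same output.
  return `<h1>Hello, ${props.name}</h1>`;
}

const view = Object.freeze({ name: "Ada" }); // freezing mimics props' immutability
console.log(greeting(view)); // <h1>Hello, Ada</h1>
```

Because the function never mutates its input, the same props always render the same markup, which is exactly what makes props safe to pass around the UI.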

State

State is an object owned by a component and it is private to that component. State can only be shared with child components by passing its values down as props. The children cannot change the state, only render it. If a child does need to mutate the state of the parent, the parent component provides a callback function in the props object. This allows the parent to decide how and when it will mutate the state. This extends the one-way data binding model into a one-way data flow model for the application. With React, the user interface is composed mainly of stateless components, with some stateful parent components controlling how state changes. With this model, data flows down the tree of components and messages flow back up the tree in the form of callback functions to trigger state changes.
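The same flow can be sketched without React itself (again with names of my own): the parent owns the state, and the child only renders props and signals changes through a callback.

```javascript
// Sketch of one-way data flow: a stateful parent passes values down
// as props and receives messages back up through a callback.
function createParent() {
  let state = { count: 0 }; // private to the parent

  // The callback the parent hands to its children via props;
  // the parent alone decides how state mutates.
  function increment() {
    state = { ...state, count: state.count + 1 };
  }

  function render() {
    // The child receives data and the callback as props; it never touches state.
    return childButton({ count: state.count, onClick: increment });
  }

  return { render, increment };
}

// A stateless child: renders its props, nothing more.
function childButton(props) {
  return `<button>${props.count}</button>`;
}

const parent = createParent();
console.log(parent.render()); // <button>0</button>
parent.increment();           // a message flows up; state changes in one place
console.log(parent.render()); // <button>1</button>
```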

How React applies functional programming

The design of React is heavily influenced by principles common to functional programming; in fact, the first prototype of React was created in OCaml. Stateless components are essentially pure functions, and the encapsulation of state mutation is a core tenet of most functional programming languages. Building a user interface by composing smaller components together in React is the same idea as function composition in functional programming. I am pleased to see the principles of functional programming forming the basis for a very popular library, as I believe they lead to safer software that is quicker to develop.
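As a rough illustration of the parallel, here is function composition in plain JavaScript (illustrative code of my own): composing two small pure functions yields a new function, just as composing components yields a larger UI.

```javascript
// compose(f, g) returns a new function that applies g, then f.
const compose = (f, g) => x => f(g(x));

const trim = s => s.trim();
const capitalize = s => s.charAt(0).toUpperCase() + s.slice(1);

// A bigger behaviour built from two single-responsibility pieces.
const cleanName = compose(capitalize, trim);
console.log(cleanName("  ada  ")); // Ada
```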

Conclusion

React brought the principles of functional programming to the JavaScript ecosystem. It also captures a view I hold: software should be developed so that state is never changed in more than one location. I have seen many applications become brittle and impossible to extend because state is being changed in a number of places. Change becomes difficult because every point of mutation must be found and checked to see that it still works with the change being made. A single point of mutation makes this problem go away.

React has now reached a point of maturity where the API has stabilized, as have supporting tools like Redux and react-devtools. It is the tool I now choose for building new web applications. It forces me to think up front about all the components I need for the user interface, which ones will hold state and which ones can be stateless. For more information, look at Pete Hunt’s great post or try React using the new Create React App tool.

Friday, May 23, 2014

CQS talk at Brighton Alt.Net

In March I gave a talk at the Brighton Alt.Net meeting about applying the Command and Query Separation pattern to application design. This is a technique that I have been using for some time to help me break up systems with bloated controllers or manager classes that are doing too much.


CQS Talk Brighton Alt.Net from Keith Bloom on Vimeo.
In the talk I mention a few resources:
The code from the talk is available on GitHub.

Monday, December 02, 2013

F# in Finance conference

On Monday 25th November I attended the F# in Finance conference at the Microsoft offices in London. I was drawn to this single-day conference as I have been learning about functional programming for some time now. I am also interested in the finance sector as it seems paradoxical to me. On the one hand it appears to have ageing IT systems and an ardent use of Excel. This seems like a bizarre way to run any business, let alone a financial institution. On the other hand it can be at the forefront of innovation in software development. Indeed, it is arguably the biggest commercial adopter of functional programming so far. So I was keen to hear how this industry was changing and to learn if there were any lessons I could use in my own programming.

The day consisted of 10 talks, an ambitious goal for a single day. The most interesting theme that I picked up on was how productive many of the speakers felt when writing F# compared to C#. Jon Harrop and Phil Trelford both talked about how modelling complex domains was vastly simpler in a functional language than in an object oriented one. Phil explained how the energy trading system he maintains has a domain model which is just a single, two hundred line file. If this were to be implemented in an object oriented language the model would span hundreds of classes.

From the discussion about domain modelling it appears that functional languages are better at separating data from behaviour. This is still abstract in my mind so I have much to learn. What is more concrete for me are the language features that help productivity. When asked in a panel session, the speakers said that the lack of null values, immutability and the built-in actor model are the main benefits of using F#. The lack of null values and immutability seem like obvious gains: null reference errors are among the most common errors in most systems, and mutated state is a source of pernicious bugs. A rogue branch of code can wreak havoc on a well-tested system if it alters some piece of state. The actor model is a higher-level construct, also aimed at limiting state changes in a system; in F# it is called the MailboxProcessor.
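As a rough JavaScript analogue of the idea behind F#'s MailboxProcessor (this is not F# code, and the names are my own): an agent owns its state privately and mutates it only while draining its mailbox of messages, so there is a single point of mutation.

```javascript
// A minimal mailbox-style agent: state is private, and the only code
// that touches it is the message-processing loop.
function createCounterAgent() {
  let total = 0;      // state owned by the agent alone
  const mailbox = []; // queued incoming messages

  return {
    post(msg) { mailbox.push(msg); },
    // Drain the mailbox in order and return the current total.
    run() {
      while (mailbox.length > 0) {
        const msg = mailbox.shift();
        if (msg.kind === "add") total += msg.value;
      }
      return total;
    },
  };
}

const agent = createCounterAgent();
agent.post({ kind: "add", value: 2 });
agent.post({ kind: "add", value: 3 });
console.log(agent.run()); // 5
```

The real MailboxProcessor runs its loop asynchronously, but the shape is the same: callers post messages rather than reaching into shared state.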

F# in Finance was a fantastic day of very focused presentations from some superb presenters. Functional programming is a clear fit for the finance sector where the domain can often be modelled in algebraic terms. Given that this is a sector where any competitive edge means vast profits I am sure the uptake of functional programming will only increase. It is good to see F# and, consequently, the CLR gaining a foothold. Thanks to the presenters I now have a clearer understanding of the advantages of functional programming and will be investigating further to see how I can improve my programming skills.

Wednesday, November 27, 2013

Functional JavaScript book review

I was very excited to receive my copy of Functional JavaScript by Michael Fogus as I am interested in, and have views on, both functional programming and JavaScript. My view of the functional programming community is that it is full of very clever people focused on creating software which is robust and malleable. This is probably because the concepts behind functional programming are hard to understand, and because it has a closer relationship to various branches of mathematics. My opinion of JavaScript is that it is the most ubiquitous programming language we have ever known. It is a language with some good features, but it has to be handled with care. The need for care is even greater when using it to program the DOM, as this is a very complex API.

The use of functional programming in JavaScript is not a new idea, indeed it has many influences from Lisp and Scheme. But it is very good to see someone write a book exploring the topic. The style of the book is very conversational and each chapter moves up through the complex layers of functional programming.

At the beginning the focus is on higher-order functions (functions taking other functions as parameters), moving all the way to flow-based programming and a brief overview of monadic programming. This structure demonstrates very well how functions can be composed together to create bigger programs. Functions written in each chapter reappear in later ones as part of a bigger whole.
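To give a flavour of that starting point, here is a small sketch of higher-order functions in JavaScript (my own illustrative code, not an excerpt from the book).

```javascript
// Higher-order functions either accept functions as arguments
// or return new functions.
const double = n => n * 2;
const isEven = n => n % 2 === 0;

// filter and map are higher-order: the behaviour is supplied as an argument.
const result = [1, 2, 3, 4]
  .filter(isEven)   // [2, 4]
  .map(double);     // [4, 8]

console.log(result); // [ 4, 8 ]

// Returning a function is the other half of the idea.
const adder = n => m => n + m;
const add10 = adder(10);
console.log(add10(5)); // 15
```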

I have read this book once and I am working my way through it again. It is rich with ideas for any JavaScript programmer.  The concepts of functional programming certainly stretched my imperative programmers mind. Stretched as it was, I enjoyed seeing Michael Fogus take an imperative process and re-implement it as a series of functions composed together. Functional JavaScript is a very enjoyable read and I would recommend you pick up a copy.

Thursday, May 16, 2013

Investigating ASP.Net MVC: Extending validation with IValidatableObject

Introduction

Frameworks are an essential part of programming. They help developers achieve complex tasks by presenting them with a simplified API over a more complex system. In my experience, it is possible to use a framework and be productive without giving too much thought to how it works.

However, I like to understand how things work. I am interested in the choices made by the framework designers. I feel that by knowing how they are built my ability to code improves and I can work with the framework more efficiently.

In this blog post I begin my investigation of the ASP.Net MVC framework. I will start by examining one part of the framework, the model binding process: how it works and how it can be extended. I will look at how the choices made by the framework designers influence the code I write and my understanding of the framework.

How flexible is the framework

The framework designer has a tricky balancing act. A good framework is simple to understand, hides the system it is abstracting, and allows for easy extension. The extension points are the API and, to create them, framework designers have several tools to choose from. The most common are composition, inheritance, and events. The choice they make has a big influence on the code I end up writing.

The ASP.Net MVC framework is an abstraction over HTTP requests and responses. It includes all three types of extension mechanism. It has been designed to create HTML applications where the server is responsible for creating the markup which will be sent to the client. This is different from frameworks where the browser creates markup using a set of web services. The generation of HTML on the server was a guiding principle of the original design and has had the most influence on the API.

Model binding, deep within the framework

I am focusing on the model binding process which takes raw HTTP requests and creates real types which can be passed to controller actions. To understand its purpose I must first understand what ASP.Net MVC does when it handles a request:
  • When an HTTP request is made the routing engine picks it up and loads the relevant controller
  • The controller examines the request and decides which action will handle it
  • When the action has been identified the controller will delegate to the model binder to create the parameters for the action method from the request data
  • When the model binder has created the objects for the action method, it checks that they are valid. If they are not, any validation errors are added to the controller's ModelState object
Now that I understand the flow of data through the framework, I can use it in my dummy application. This application allows people to tell me their favourite food so that I can keep some statistics on the favourite foods of the world. Unfortunately, now and then, someone types in "House" to try to skew the results. My task, then, is to add validation to the application to prevent this.

So far my application consists of a form, a view model object which will represent the input, and a controller to handle the request.



My controller action checks the validity of the input and will either update the statistics or return the form where MVC will display the errors for me. My FoodViewModel class will never fail validation though, as the framework has no knowledge of what I consider an invalid request. To achieve that I have to implement some form of validation. One solution is to add the validation logic to the controller action.

My controller now checks the form data to see if anyone has entered "House" as their favourite food. If present, I add my error to the ModelState collection, which also sets the validity of the ModelState to false. My controller will now detect invalid requests.

The controller code above demonstrates a common mistake I see in MVC applications. Here the controller is doing too much work and the code is failing to use the extensions available in the framework. Instead, the FoodViewModel can be extended to work with the model binding process to handle the validation in a more elegant and focused manner.

Extending the validation process

There are two ways that I can augment my FoodViewModel with validation rules. Simple validation can be achieved by decorating properties with attributes like [Required] or [StringLength]. The model binder will detect these and enforce the rules accordingly.

For more complex validation the framework designers chose composition as a way for my code to participate in validation and created the IValidatableObject interface.

This has a method called Validate which accepts a ValidationContext and returns an enumerable of ValidationResults. To show how this works I have updated FoodViewModel to implement the interface.

It implements the interface by defining the Validate method so that when the model binder runs it can ask my object to validate itself. If the FavouriteFood property contains the word "House" it returns an error message.

Coding to a contract

The IValidatableObject interface is a contract between the model binder and my view model which allows them to work together. The FoodViewModel is declaring that it can behave as an IValidatableObject. This allows the model binder to ask if it is valid.

For the model binder this is a powerful tool. By defining this interface the model binder achieves two things: it can open itself up to the outside world and it can delegate the job of validation to someone else. This code demonstrates how the model binder can implement this.

To mimic the process used by the model binder I use reflection to create an instance of the FoodViewModel and then cast it to an instance of IValidatableObject. If the cast succeeds I call the Validate method (to keep the example simple I pass in null for the validation context). Any errors that are returned I store in my error collection. Finally, I output all the messages to the console.

This code shows the power and simplicity of composition. The example code is focused on managing the process of collecting errors from other objects. It does not have any knowledge of how to validate an object, but it uses a known contract to collect the results. The process of validation has been extracted and put behind the IValidatableObject interface. This allows other code to extend the process by supplying its own implementations of the validation logic. When this happens, the two pieces combine into a single process which does more than either could independently. This is the goal of composition: combining many simple objects to create a more complex one.
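The C# from the post is not reproduced here, but the contract idea can be sketched in JavaScript, with duck typing standing in for the interface cast (all names are illustrative): a generic collector gathers errors from any object that honours the validate contract, without knowing any rules itself.

```javascript
// A view model that knows how to validate itself -- the "contract".
const foodViewModel = {
  favouriteFood: "House",
  validate() {
    const errors = [];
    if (this.favouriteFood.toLowerCase() === "house") {
      errors.push("House is not a food");
    }
    return errors;
  },
};

// The "model binder": it checks for the contract, then delegates.
// It knows nothing about food, houses, or any other validation rule.
function collectErrors(model) {
  if (typeof model.validate === "function") {
    return model.validate();
  }
  return []; // no contract, nothing to collect
}

console.log(collectErrors(foodViewModel)); // [ 'House is not a food' ]
```

The collector and the view model compose into a single validation process, which is the same shape as the model binder casting to IValidatableObject and calling Validate.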

Conclusion

I feel that too often developers fail to think about the way a framework is intended to be used or what decisions have been made to abstract the lower level system. A typical indication of a lack of thinking is an application which recreates existing parts of the framework. Exploring the code and the API of a framework helps me to avoid this. I also expand my knowledge of how to use it efficiently and how to design my own code.

Examining the model binder process has given me a greater knowledge of how ASP.Net MVC takes an HTTP request and generates an object for a controller action. Understanding this complex process allows me to work with the framework so that I can extend my code in the simplest way possible to achieve the goal of validation.

I also gain knowledge by studying how composition is used in a complex process. I am now able to apply this powerful design pattern to my own code. I feel that studying existing code is an excellent way to expand my knowledge and, to be honest, I find it fun to learn how things work.

Sunday, September 23, 2012

SQL Baseline has joined the ChuckNorris Framework

I am very pleased to say that Rob and Dru have added my SQL Baseline tool to the Chuck Norris Framework. As part of SQL Baseline’s inauguration it has been renamed as PoweruP to fit alongside the likes of RoundhousE, DropkicK and WarmuP. The project has been moved over and can be found here.

I created PoweruP to help me configure RoundhousE to manage a number of existing databases. This is not an easy task and can be a barrier which stops people trying out RoundhousE, as is shown by this conversation.


This is a shame because once RoundhousE is set up it greatly increases development speed, is simple to maintain, and brings database development in line with application coding. What can stop people using it is the need to extract all the stored procedures, views, functions, etc., from the database. With one command PoweruP will scaffold a new RoundhousE project from an existing database, create the scripts, and put them in the default RoundhousE folder structure. For a more detailed explanation see this post.

I am very pleased for PoweruP to be part of the Chuck Norris framework. I hope it will help more development teams to get started using RoundhousE because it is the best tool I have found for managing changes to the database schema.

Tuesday, September 18, 2012

Using 0MQ to communicate between threads

In this post I show how 0MQ can help with concurrency in a multithreaded program. To do this, I explore what concurrency means and why it is important. I then focus on in-process concurrency and threaded programming, a topic which is notoriously tricky to do well due to the need to share some kind of state between threads. I explore why this is and how it is typically tackled. I then show how communication between threads can be achieved without sharing any state using 0MQ. Finally, I propose that constructing our multi-threaded applications using the 0MQ model leads us to more succinct and simpler code.

All code can be found in this GitHub project.

What is a concurrent program?

The word concurrent means more than one thing working together to achieve a common goal. In computing this usually means doing one of two things: something which is computationally expensive, like encoding a video file, or something that requires some sort of IO, like retrieving the size of a number of web pages.

The opportunity to employ concurrency has exploded with the arrival of multicore processors and the rise of hosted processing platforms like Amazon EC2 and Windows Azure. These two changes represent the two ends of the concurrency spectrum. To achieve concurrency on a multicore processor we create threads within our application and manage how they will share state. Achieving concurrency using something like EC2, on the other hand, is network-based and requires the use of a communication channel like TCP. When communicating over the network, state is handled by passing messages.

0MQ recognises that the best way to create a concurrent program is to pass messages and not to share state. Whether it is two threads running within a process or thousands of processes running across the internet, 0MQ uses the same model of sockets and messaging to create very stable and scalable applications.

Multiple threads, shared state and locks

In .Net, any program that must do more than one task at a time must create a thread. Threads are a way for Windows to abstract the management of many different streams of execution. Each thread gets its own stack and set of registers, and the OS handles which thread executes at any one time.

The problem with threads is that they typically communicate by sharing some value in memory. This can cause data corruption, as more than one thread could be accessing the data at one time, so the application has to manage access to the shared data. This is done by locking the shared data, ensuring that only one thread can manipulate it at any one time. This mechanism adds complexity to an application, as it must include the locking logic, and it also has an effect on performance.

0MQ: multiple threads and no shared state

0MQ makes threaded programming simpler by swapping shared state for messaging. To demonstrate this I have created a simple program which calculates the size of a directory by adding up the sizes of the files it contains.

As we are using 0MQ, we have to understand some of the concepts it uses. The first is static and dynamic components. Static components are pieces of infrastructure that we can always expect to be there; they usually own an endpoint which can be bound to. Dynamic components come and go, and generally connect to endpoints. The next concept is the types of socket provided by 0MQ. The implementation we’ll be looking at uses two types, PUSH and PULL. The PUSH socket is designed to distribute work fairly to all connected clients, whilst the PULL socket collects results evenly from the workers. Using these socket types prevents one thread from being flooded with tasks or left idle waiting for its result to be taken.

Finally the 0MQ guide has a number of patterns for composing an application depending on the type of work being done. The example below calculates the size of a directory by getting the size of each file and adding them together. To achieve this task in 0MQ, a good choice is the task ventilator pattern.

[Diagram: the task ventilator pattern — a Ventilator pushes tasks to a pool of Workers, which push their results to a Sink]

In the diagram each box is a component in our application, and components communicate with each other using 0MQ sockets. There are two static components in this application, the Ventilator and the Sink. There will only be one instance of each in the application and they will run on the same thread. There is one dynamic component, the Worker. There can be any number of workers and each one runs on its own thread.

To calculate the size of the directory, the Ventilator is given a list of files from the directory. It sends the name of each one out on its message queue.

When the Sink is started, it is given the number of files to count the size of; in this instance we pass in the length of the array that we gave to the Ventilator. The Sink then pulls in the results from each of the workers and increments the running total for the size of the directory. When it has finished, it returns the total size of the files found.

The Worker connects to the Ventilator and Sink end points and sits in an endless loop.

When a message arrives from the Ventilator it triggers an event which causes the Worker to read the file from the disk to find its size. When the operation completes the Worker publishes the size to the Sink’s end point.

All the components are brought together in the controlling program. We create a 0MQ context which will be shared with all the components. This is an important point when using 0MQ with threads: there must be a single context and it must be shared amongst all the threads. We then create instances of the Ventilator and Sink, passing in the context.

Next we create five workers each on their own thread, again passing in the 0MQ context.

We do the work by building an array of files from our directory and passing this to the Ventilator. We tell the Sink how many results to expect and wait for the result to be returned.

When we have the final number we print it on the console. At no point in the process did any thread have to update a shared value.
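The flow described above can be sketched schematically in JavaScript (plain arrays stand in for 0MQ sockets and a single thread stands in for many; the names are mine). The point is the shape of the data flow, not the transport.

```javascript
// Schematic sketch of the task ventilator pattern.
// Queues replace PUSH/PULL sockets; no component shares mutable state.
function ventilator(files, workQueue) {
  // Push one task per file onto the work queue.
  for (const file of files) workQueue.push(file);
}

function worker(workQueue, resultQueue) {
  // Pull a task, do the work (here, just reading the stored size),
  // and push the result towards the sink.
  while (workQueue.length > 0) {
    const file = workQueue.shift();
    resultQueue.push(file.size);
  }
}

function sink(resultQueue, expected) {
  // Collect exactly the expected number of results and total them.
  let total = 0;
  for (let i = 0; i < expected; i++) total += resultQueue.shift();
  return total;
}

const files = [{ name: "a.txt", size: 10 }, { name: "b.txt", size: 32 }];
const work = [];
const results = [];

ventilator(files, work);
worker(work, results); // in the real program, many workers on many threads
console.log(sink(results, files.length)); // 42
```

In the real 0MQ version the queues are sockets, the workers run concurrently, and the PUSH/PULL semantics handle fair distribution, but each component still only ever sees messages, never shared state.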

Conclusion

In this post I investigated the programming challenges faced when dealing with concurrency, focusing on those specific to threaded concurrency. I have shown how 0MQ approaches this problem with the view that concurrency should never involve sharing state, and that communication is best handled by passing messages between processes. To demonstrate how this works I created a simple program to calculate the size of a directory and used the 0MQ task ventilator pattern to structure it. By following this pattern the software is broken down into very specific parts, each performing one job. All knowledge of how to read the size of a file is held in the worker; if we discover a better way to read the size of a file, this component can be changed without any impact on the rest of the program. This isolation is a consequence of only allowing communication between the key components over a message channel, and the code is simpler because each component does only one job.

All code can be found in this GitHub project.