Quality in an event-driven, plugin-based browser framework

(Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

We’ve designed, developed from scratch, and are running an event-driven, plugin-based browser framework that we call Instructions Steps (IS). It helps us visualize 3D models and building instructions on the BuildIn3D and FLLCasts platforms. Currently it consists of hundreds of extensions separated into about 50 repos with 587 releases. We’ve figured out a way to keep the quality of the whole framework at a really good level, with almost no bugs and errors.

This article is about how we do it. The main purpose is to give an overview for newcomers to our team, but I hope the developer community as a whole could benefit from it.

The IS architecture (for context)

I will go into the details of the IS architecture in another article. For this article it is enough to say that IS consists of a really small core – 804 lines of code – and a lot of extensions.

There are many extensions that extend the framework. Most of them are under 200 lines. The framework is highly decoupled and “everything is an extension”. Look at the 3D model and building instruction below. The “previous” button is an extension. The “next” button is an extension. You could have a “parts list”, a “bill of materials”, play animations, fit and rotate the camera. These are all extensions. I took this idea from the way we were building plugins for Eclipse (many years ago).

Sphere from Geosmart GeoSphere, but this time in 3D
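To give a rough idea of what “everything is an extension” means in practice, here is a minimal sketch. The names below (IS.EventBus, IS.NextButton, the 'step:next' event) are illustrative, not the real IS API – the point is only that the core dispatches events and every piece of behavior subscribes to them as an extension.

// Namespace guard for this sketch.
var IS = IS || {};

/**
 * A tiny event bus - the hypothetical "core" in this sketch.
 * @constructor
 */
IS.EventBus = function() {
  /** @private {!Object<string, !Array<function(*)>>} */
  this.listeners_ = {};
};

/**
 * @param {string} eventName
 * @param {function(*)} callback
 */
IS.EventBus.prototype.on = function(eventName, callback) {
  (this.listeners_[eventName] = this.listeners_[eventName] || []).push(callback);
};

/**
 * @param {string} eventName
 * @param {*} payload
 */
IS.EventBus.prototype.emit = function(eventName, payload) {
  (this.listeners_[eventName] || []).forEach(function(cb) { cb(payload); });
};

/**
 * The "Next" button as an extension - it only knows about the bus.
 * @constructor
 * @param {!IS.EventBus} bus
 */
IS.NextButton = function(bus) {
  var button = document.createElement('button');
  button.textContent = 'Next';
  button.addEventListener('click', function() {
    // The button only emits an event. Whoever tracks the current step
    // listens for it - the button knows nothing about the other extensions.
    bus.emit('step:next', null);
  });
  document.body.appendChild(button);
};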

What is the problem with quality and how do we keep delivering a quality product?

Event-driven, plugin-based architectures have many advantages – decoupling the plugins makes them more maintainable, and it forces you to have clear API boundaries, which also makes them more maintainable. But there are a lot of questions and drawbacks compared to a nice monolith app. What should we test? Should we test a specific extension, the repo, or the extension as it works with all of its direct dependencies? What kind of specs should we develop? Should we have small unit specs that test the extension in isolation, or should we put all the hundreds of extensions together and test them as a whole? How do we make these decisions?

Here are a few simple rules that we try to follow.

Compilation and type checking with Google Closure Compiler in ADVANCED mode

We use vanilla JavaScript. No TypeScript. There are reasons for this – nothing against TypeScript, actually. We use Google Closure Compiler to compile each and every extension.

Here is an example of a declaration of an “interface method”

/**
 * Loads the given url and returns a Promise that when resolved will provide the caller with a {@link IS.StepsTree.StepData}.
 *
 * @export
 * @param  {string|File} file - url to the file or a DOM File object to be loaded
 * @return {Promise} Promise that when resolved will provide a {@link IS.StepsTree.StepData} which is the root step
 */
IS.StepsTree.IProvider.prototype.getStepsTree = function(file) {};

Looking at the code, we have JSDoc annotations like “@param”, “@return” and “@export”. These are annotations that GCC understands and checks. It will check that the param is of the given type, that the returned value is of the given type, and that the classes that implement this interface actually implement it.

Google Closure Compiler (GCC) will check if we are trying to access properties and methods that are not available.
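For example, an implementation declares that it implements the interface, and GCC verifies the declaration at compile time. The provider below is illustrative (it is not one of our real extensions, and the StepData construction is simplified), but it shows the kind of checks we get:

/**
 * Loads a steps tree from a JSON description. Illustrative implementation.
 *
 * @constructor
 * @implements {IS.StepsTree.IProvider}
 */
IS.StepsTree.JSONProvider = function() {};

/** @override */
IS.StepsTree.JSONProvider.prototype.getStepsTree = function(file) {
  // If this method returned, say, a string instead of a Promise, took a
  // number instead of {string|File}, or was missing entirely, GCC in
  // ADVANCED mode would report an error at compile time.
  var url = (file instanceof File) ? URL.createObjectURL(file) : file;
  return fetch(url)
      .then(function(response) { return response.json(); })
      .then(function(json) { return new IS.StepsTree.StepData(json); });
};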

As a general rule of thumb – compilers are strict. If they understand your code and compile it, then your code fulfills a bare minimum of requirements.

GCC has helped us a lot. It takes some time to get used to it, to learn all the annotations and how to use them, and to learn how to develop SDKs and libraries that are compiled, but it pays off. I’ve previously shared our experience with GCC. Here is one lecture that I gave a few times – https://github.com/thebravoman/google-closure-compiler-presentation/blob/main/gcc_presentation.md

Each extension is tested in isolation only with its direct dependencies available

The navigation extensions are located in the repo “is-navigation”. When we test the functionality of the “Next” button we don’t expect to also have the “Fullscreen” or the “Animations” extensions available.

Each extension is tested automatically in isolation, because each extension should work on its own, given that it is the only extension that is installed (along with its direct dependencies, of course). Which makes sense. We are building a framework, a platform. When we have a framework, a platform or even an OS, we should be able to install one extension, app or program and it should be able to work on its own.

For testing the extensions we use Jasmine and Teaspoon, and I wrote an article about how and why we do it.
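To make this concrete, here is a sketch of what such an isolated spec looks like. The IS.Test.installExtension helper and the spec itself are illustrative – the real specs in is-navigation differ – but the shape is the same: only the extension under test and its direct dependencies are loaded on the fixture page.

describe('Next button', function() {
  var viewer;

  beforeEach(function() {
    // Hypothetical helper - installs only this extension (plus its direct
    // dependencies) on a fixture page with a small instruction loaded.
    viewer = IS.Test.installExtension('is-navigation/next-button');
  });

  it('advances to the next step when clicked', function() {
    viewer.clickNext();
    expect(viewer.currentStepIndex()).toBe(1);
  });
});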

All extensions are tested together in the ‘release_pack’

What teams building platforms and frameworks quickly find out is that all the extensions and apps can work separately, but there are a lot of cases where, if you put them all together and install them, things start to get more difficult. An example is all the different problems different operating systems have – one program affecting another program in an unpredictable way.

So we’ve built the is-release_pack. What it does is put all the extensions together and run a few basic tests on all of them.

It contains 1-2 specs that check that each extension works in the general case, and probably one or two very specific cases. All the other specific cases are tested in the extensions, not in the “release_pack”. We push everything we can to the specs of the specific extension, but we keep a few “integration” specs in the is-release_pack. And it is beautiful.
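To give a feeling for what lives there, here is a sketch of such an “integration” spec. The helpers are again hypothetical, but notice that a single scenario touches several extensions at once – navigation, parts list – with all extensions installed on the page.

describe('release_pack: viewing a building instruction', function() {
  it('navigates steps and shows the parts list', function(done) {
    // Hypothetical helper - loads an instruction with every extension installed.
    IS.Test.loadInstruction('fixtures/sphere.ins').then(function(viewer) {
      viewer.clickNext();
      expect(viewer.currentStepIndex()).toBe(1);
      expect(viewer.partsList().isVisible()).toBe(true);
      done();
    });
  });
});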

The downside of integration specs

There is one major downside with integration specs:

All of us, developers, are lazy when it comes to really building it right. Once we build the feature and see that it is working after a day of work, there is little motivation in us to spend the next 3 days on building it right. It just feels so good to have it working that you commit and move on to the next thing.

When there is a problem and an integration spec is failing, most of the time it is easier to go and “ease” the integration spec. Change the expects a bit. Modify them. Even remove them.

Other times, when we have to develop a specific spec for a specific extension, it feels easier to develop an integration spec instead of 20 specific specs in the repo of the extension.

Sooner or later you end up in one of these situations:

  1. There are no integration specs, or they contain expects and assertions that cannot validate that your product is working correctly.
  2. There are a ton of integration specs that are randomly failing from time to time, and the “integration” specs suite takes forever to pass as there are now so many of them.

Both of these situations are highly undesirable.

Resolving the downside of integration specs

One thing I learned while writing business plans and applying for different VC funding is RACI.

There are people Responsible for the job, people Accountable for the job, people that could be Consulted and people that should be kept Informed.

So who is Accountable for the delivery of the IS framework and for the framework working correctly with all extensions in the user’s browser?

Ideally it should be one person. In our case – it is Me.

We are all Responsible for the implementation. But in a team, one person should be held Accountable if something is not working or not right. One is Accountable for not checking. This person could change, of course, but at any given moment there is someone “starting the engine of the car” as it exits the factory. You should start the engine and make sure the car works. You are Accountable for checking it. You might not be Responsible if it does not start, but you are Accountable for checking.

With the is-release_pack we resolved this for us.

Only the Accountable (me, in our case) has access to the is-release_pack and its specs. Nobody else. You cannot add integration specs, you cannot remove them, you cannot even change them on your own. The person that is Accountable should do it. We keep the number of specs to a minimum – one basic scenario for each extension and, when appropriate, 1-2 (but no more) very specific scenarios for each extension. In the release_pack we prefer to have scenarios that involve more than one extension. In fact, if there is a scenario that involves all the extensions, we would probably use it.

Are integration specs coupling the extensions?

Yes. They are. When one extension fails, the integration spec for all the extensions fails. That is true. With hundreds of extensions, if a different extension “fails” every day, then you will not have a successful run of the integration suite in years.

But the customer “does not care”. The integration spec is the closest spec to the customer experience. The user never interacts with a single extension. They interact with all the extensions.

At the same time, if an extension has reached the release_pack and is failing there, we will go and add a new spec – but not in the release_pack. We add it to the repo of the specific extension. This protects us from regressions.

Conclusion

By having a small set of integration specs in a project to which only I have access, and which is the final step in the release pipeline, we’ve managed to stop hundreds of releases that would have broken existing clients, lost a feature or introduced a bug.

587 official releases already, and it takes 5 to 10 minutes to release the whole framework. Integration specs are present in the release_pack, but we keep them to a minimum, each testing many extensions at once and making sure that a real-life client scenario is working.