Tagged: javascript

  • kmitov 7:13 am on November 10, 2021 Permalink
    Tags: javascript

    Migrating to jasmine 2.9.1 from 2.3.4 for teaspoon 

    We finally decided it is probably time to try to migrate to jasmine 2.9.1 from 2.3.4.

    There was an error that started occurring randomly, and before digging down, investigating it and in the end finding out that it was probably the result of a wrong version, we decided to try to get up to date with jasmine.

    The latest jasmine version is 3.X, but 2.9.1 is already a huge step from 2.3.4.

    We will try to migrate to 2.9.1 first. The issue is that the moment we migrated, this error appeared:

    'beforeEach' should only be used in 'describe' function

    It took a couple of minutes, but what we found out is that fixtures are used in different ways in the two versions.

    Here is the difference and what should be done.

    jasmine 2.3.4

    fixture.set could be both in the beforeEach and in the describe

    // This works
    // fixture.set is in the describe
    describe("feature 1", function() {
      fixture.set(`<div id="the-div"></div>`);
      beforeEach(function() {
      })
    })
    // This works
    // fixture.set is in the beforeEach
    describe("feature 1", function() {
      beforeEach(function() {
        fixture.set(`<div id="the-div"></div>`);
      })
    })

    jasmine 2.9.1

    fixture.set could be only in the describe and not in the beforeEach

    // This does not work as the fixture is in the beforeEach
    describe("feature 1", function() {
      beforeEach(function() {
        fixture.set(`<div id="the-div"></div>`);
      })
    })
    // This does work
    // fixture.set could be only in the describe
    describe("feature 1", function() {
      fixture.set(`<div id="the-div"></div>`);  
      beforeEach(function() {
        
      })
    })
     
  • kmitov 7:33 am on October 8, 2021 Permalink
    Tags: hotwire, javascript, turbo

    [Rails, Hotwire] Migrate to Turbo from rails-ujs and turbolinks – how it went for us. 

    We recently decided to migrate one of our newest platforms to Turbo. The goal of this article is to help anyone who plans to do the same migration. I hope it gives you a perspective of the amount of work required. Generally it was easy and straightforward, but a few specs had to be changed because of URLs and controller results.

    Gemfile

    Remove turbolinks and add turbo-rails. The change was

    --- a/Gemfile.lock
    +++ b/Gemfile.lock
    @@ -227,9 +227,8 @@ GEM
         switch_user (1.5.4)
         thor (1.1.0)
         tilt (2.0.10)
    -    turbolinks (5.2.1)
    -      turbolinks-source (~> 5.2)
    -    turbolinks-source (5.2.0)
    +    turbo-rails (0.7.8)
    +      rails (>= 6.0.0)

    application.js and no more rails-ujs and Turbolinks

    Added “@hotwired/turbo-rails” and removed Rails.start() and Turbolinks.start()

    --- a/app/javascript/packs/application.js
    +++ b/app/javascript/packs/application.js
    @@ -3,8 +3,7 @@
     // a relevant structure within app/javascript and only use these pack files to reference
     // that code so it'll be compiled.
    
    -import Rails from "@rails/ujs"
    -import Turbolinks from "turbolinks"
    +import "@hotwired/turbo-rails"
     import * as ActiveStorage from "@rails/activestorage"
     import "channels"
    
    @@ -14,8 +13,6 @@ import "channels"
     // Collapse - needed for navbar
     import { Collapse } from 'bootstrap';
    
    -Rails.start()
    -Turbolinks.start()
     ActiveStorage.start()

    package.json

    The change was small

    --- a/package.json
    +++ b/package.json
    @@ -2,10 +2,10 @@
       "name": "platform",
       "private": true,
       "dependencies": {
    +    "@hotwired/turbo-rails": "^7.0.0-rc.3",
         "@popperjs/core": "^2.9.2",
         "@rails/actioncable": "^6.0.0",
         "@rails/activestorage": "^6.0.0",
    -    "@rails/ujs": "^6.0.0",
         "@rails/webpacker": "5.4.0",
         "bootstrap": "^5.0.2",
         "stimulus": "^2.0.0",

    Devise still does not work

    For the Devise forms you have to add “data: {turbo: ‘false’}” to disable turbo for them:

    +<%= form_for(resource, as: resource_name, url: password_path(resource_name), html: { method: :post }, data: {turbo: "false"}) do |f| %>

    We are waiting for a resolution of https://github.com/heartcombo/devise/pull/5340

    Controllers have to return an unprocessable_entity on form errors

    If there are active_record errors in the controller, we must now return status: :unprocessable_entity, because Turbo expects a failed form submission to respond with a redirect or a non-200 status:

    +++ b/app/controllers/records_controller.rb
    @@ -14,7 +14,7 @@ class RecordsController < ApplicationController
         if @record.save
           redirect_to edit_record_path(@record)
         else
    -      render :new
    +      render :new, status: :unprocessable_entity
         end
       end

    application.js was reduced significantly

    The old application.js – 932 KiB

      application (932 KiB)
          js/application-dce2ae8c3797246e3c4b.js

    The new application.js – 248 KiB

    remote:        Assets: 
    remote:          js/application-b52f4ecd1b3d48f2f393.js (248 KiB)

    Conclusion

    Overall it was a good experience. We are still facing some minor issues with third-party chat widgets like tawk.to that do not work well with turbo: they send one more request, refresh the page and add the widget to an iframe that is then lost on turbo navigation. But we would probably move away from tawk.to anyway.
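
    If you need to keep such a widget, one workaround is to re-attach it after every Turbo visit. Here is a minimal sketch of the idea – the element id and the script URL are assumptions for illustration, not the tawk.to API:

    // Turbo replaces the <body> on navigation, so anything a widget
    // injected into the page is lost. Re-attach it on every visit.
    document.addEventListener("turbo:load", () => {
      if (!document.getElementById("chat-widget")) {
        const script = document.createElement("script");
        script.src = "https://example.com/chat-widget.js"; // assumed URL
        script.async = true;
        document.body.appendChild(script);
      }
    });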

     
  • kmitov 6:29 am on October 8, 2021 Permalink
    Tags: javascript

    [Rails] Warden.test_reset! does not always reset and the user is still logged in 

    We had this strange case of a spec that was randomly failing:

      scenario "generate a subscribe link for not logged in users" js: true do 
        visit "/page_url"
    
        expect(page).to have_xpath "//a[text()='Subscribe']"
        click_link "Subscribe"
        ...
      end 

    When a user is logged in we generate a button that subscribes them immediately. But when a user is not logged in we generate a link that will direct the users to the subscription page for them to learn more about the subscription.

    This works well, but the spec is randomly failing sometimes.

    We expect there to be a link, e.g. “//a”, but on the page there is actually a button, e.g. “//button”.

    What this means is that when the spec started there was a logged in user. The user was still not logged out from the previous spec.
    This explains why the spec sometimes fails and sometimes passes – because we are running all the specs in random order:

    $ RAILS_ENV=test rake spec SPEC='...' SPEC_OPTS='--order random'

    Warden.test_reset! is not always working

    There is a Warden.test_reset! that is supposed to reset the session, but it seems that for js: true cases, where we have a Selenium driver, the user is not always reset before the next test starts.

    # spec/rails_helper.rb
    RSpec.configure do |config|
      ...
      config.after(:each, type: :system) do
        Warden.test_reset!
      end
    end

    Logout before each system spec that is js: true

    I decided to try to explicitly log out before each js: true spec that is ‘system’, so I improved the RSpec configuration:

    RSpec.configure do |config|
      config.before(:each, type: :system, js: true) do
        logout # NOTE Sometimes when we have a js spec the user is still logged in from the previous one
        # Here I am logging it out explicitly. For js it seems Warden.test_reset! is not enough
        #
        # The results when I run the specs without this logout are:
        # Finished in 3 minutes 53.7 seconds (files took 28.79 seconds to load)
        #   383 examples, 0 failures, 2 pending
        #
        # With the logout they are:
        #
        # Finished in 3 minutes 34.2 seconds (files took 21.15 seconds to load)
        #   383 examples, 0 failures, 2 pending
        #
        # Randomized with seed 26106
        # 
        # So we should not be losing on performance
      end
    end

    Conclusion

    Warden.test_reset! does not always log out the user successfully before the next spec when the specs are run with the Selenium driver – e.g. “js: true”. I don’t know why, but that is the observed behavior.
    I’ve added a call to “logout” before each system spec that is js: true to make sure the user is logged out.

     
  • kmitov 6:44 am on March 30, 2021 Permalink
    Tags: javascript

    Dead code and one more way it could hit you back – by being loaded! 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We can all agree that dead code should be removed. What we sometimes fail to see is how much it could cost to leave dead code in our product. Today’s article is about an example of “dead code” and how it hit us back 1.5 years after we stopped using it. It took us 3 hours to debug and find the root cause. The conclusion is that we should have done ‘git rm’ a year ago. We did not, and the cost was 3 hours.

    This is a real life example from our code base and it is designed to share experience with our team, but I hope the whole community could benefit.

    What happened

    1. Jenkins build failed

    The first time we saw it we decided it was not something serious, and it kept failing for a few days until we had time to address it in the next Sprint.

    2. Teaspoon, Jasmine, Bundler, Rails Engines

    In the project there was the production code in Rails.root along with dummy test code for starting JavaScript specs in Rails.root/test/dummy/. We are using Teaspoon with Jasmine in Rails engines. Rails engines tend to create a test/dummy folder in which you could put the code for a test app in which you want your JavaScript specs to be executed. The production code is loaded in the test app and the specs are executed.

    About 1.5 years ago I did some experiments with this test app and made a few configurations. The goal was to have the production code along with some test app code both loaded in the test environment. In this way Teaspoon and Jasmine would see both the production and the test app code as coming from “production”. I literally remembered what my idea was and what I was trying to achieve. In the end I decided not to use the test app code, but to have only the production code located in Rails.root loaded in the test environment.

    But I forgot to remove the test app code from test/dummy/app/assets/javascripts/

    3. Teaspoon Error

    The following error occurred. Here “IS” is a JavaScript namespace that is part of the production code.

    jenkins@vpszap6s:~/jobs/is-core Build and Release/workspace/test/dummy$ xvfb-run -a bundle exec rake teaspoon --verbose
    Starting the Teaspoon server...
    Teaspoon running default suite at http://127.0.0.1:36051/teaspoon/default
    ReferenceError: IS is not defined
    
    

    4. The error and debugging process

    Not sure what the error was exactly, I tried to identify the differences between my machine and the server machine where the tests were failing. This server machine is called “elvis”.

    1. Bundler version – my version was 2.2.11 and the elvis version was 2.2.14. Tried an upgrade, but it did not resolve the issue.
    2. Chrome driver – since the tests were executed on Chrome I saw the chrome driver versions were different. Mine was 86, elvis was on 89. Synced them; the error was still occurring.
    3. Rake version – rake versions were the same
    4. Ruby version – the same
    5. Teaspoon and Jasmine versions – the same
    6. The OS is the same
    7. I could not find any difference between my env and the elvis env.

    Turns out the problem was in the loading order. elvis was loading test app code and then production code. My machine was loading production code and then test app code. I could not figure out why. The whole test app code was:

    // test/dummy/app/assets/javascripts/ext/dummy/dummy.js 
    IS.Dummy = {};

    The error that was occurring was:

    ReferenceError: IS is not defined
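
    A minimal sketch of why the order matters – the namespace declaration is assumed to look like this in the production code:

    // Production code, when loaded first, declares the namespace:
    var IS = IS || {};

    // test/dummy/app/assets/javascripts/ext/dummy/dummy.js then extends it:
    IS.Dummy = {};

    // If dummy.js happens to load before the production code, `IS` does
    // not exist yet and the assignment throws:
    // ReferenceError: IS is not defined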

    5. Solution

    I was curious to continue debugging to see why the two machines were loading the code in a different order, but in the end decided against it. I just removed the dead code in test/dummy/app/assets/javascripts/ext/dummy/dummy.js, which I only thought of as dead, but which was actually affecting our project.

    6. Builds passing

    Finally the builds were passing.

    Conclusion

    Dead code may not be that dead after all. It might be loaded even if it is not used, and the loading order could differ between machines. Better be safe and remove dead code altogether. If we ever need it that much, we have it in the Git repo history.

     
  • kmitov 1:34 pm on March 25, 2021 Permalink
    Tags: javascript

    Stimulus 1.1.1 to Stimulus 2.0.0 – practical cost 

    We just migrated from Stimulus 1.1.1 to Stimulus 2.0.0, and I decided to share this with our whole team, but I thought it could be useful for the whole community.

    The practical cost is that this migration could be done in a few minutes per Stimulus controller. With 10-20 controllers you should be done in less than a day.

    Overview of the changes

    Here are a few examples of the changes that were committed for a single _form.html.erb

    html.erb

    # app/views/public_users_searches/_form.html.erb
    -            target: "public-users-searches.term",
    +            public_users_searches_target: "term",
    ...
    -    <%= f.submit t('public_users_searches.form.search'), data: {target: "public-users-searches.commit", value: t('public_users_searches.form.search')} do %>
    +    <%= f.submit t('public_users_searches.form.search'), data: {public_users_searches_target: "commit", value: t('public_users_searches.form.search')} do %>
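
    The controller itself did not need changes for the targets – only the data attributes in the views changed. For context, here is a hypothetical controller matching the form above (the file name and the method are assumptions, not from the commit):

    // app/javascript/controllers/public_users_searches_controller.js
    import { Controller } from "stimulus"

    export default class extends Controller {
      // Target declarations are the same in 1.1.1 and 2.0.0.
      // Only the markup changed:
      //   1.1.1: data-target="public-users-searches.term"
      //   2.0.0: data-public-users-searches-target="term"
      static targets = ["term", "commit"]

      search() {
        console.log(this.termTarget.value)
      }
    }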
    

    Conclusion

    If you are wondering whether you should migrate or not and how much it will cost – you should probably migrate. It is not that expensive.

     
  • kmitov 7:52 am on March 24, 2021 Permalink
    Tags: javascript

    We don’t need ‘therubyracer’ and the libv8 error for compiling native extensions 

    This article is about the error:

    Installing libv8 3.16.14.19 with native extensions
    Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

    and the fact that we don’t need therubyracer – and probably you don’t need it either.

    There are two solutions for the above error:

    # To install libv8 with a V8 without precompiling V8 
    $ gem install libv8 -v '3.16.14.19' -- --with-system-v8
    
    # Or to comment therubyracer in your Gemfile and not to use it.
    # gem 'therubyracer'

    We went for the latter. Here are more details.

    What are libv8 and therubyracer?

    From libv8

    V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others.

    What therubyracer does is to embed the V8 JavaScript interpreter into Ruby.

    In Rails there is the execjs gem, used in asset precompilation.

    ExecJS lets you run JavaScript code from Ruby. It automatically picks the best runtime available to evaluate your JavaScript program, then returns the result to you as a Ruby object.

    https://github.com/sstephenson/execjs

    This means that execjs needs a JS runtime and it uses the one provided by therubyracer, libv8 and V8. But it can use other runtimes as well, like Node.

    What is the libv8 compilation error?

    The libv8 error occurs because it cannot compile V8 on the machine. V8 is developed in C++. To successfully compile it, the machine must have all the dependencies needed for compilation. You can try to compile it from scratch or you could use an already compiled one. Compiling from scratch takes time to resolve all the dependencies.

    But we didn’t need therubyracer

    Turns out that on this specific platform we don’t need therubyracer, because we already have Node because of webpacker/webpack – we have it on all the machines and we have it on production on Heroku.

    therubyracer is also discouraged on Heroku –

    If you were previously using therubyracer or therubyracer-heroku, these gems are no longer required and strongly discouraged as these gems use a very large amount of memory.

    A version of Node is installed by the Ruby buildpack that will be used to compile your assets.

    https://devcenter.heroku.com/articles/rails-asset-pipeline#therubyracer

    therubyracer is great for starting quickly, especially when there is no Node. But considering that there was Node on all of our machines, we just don’t need therubyracer.

    As a result the slug size on Heroku was reduced from 210M to 195.9M. Not bad. I guess the memory consumption will also be reduced.

    remote: -----> Compressing...        
    remote:        Done: 210M        
    remote: -----> Launching...        
    remote:        Released v5616     
    
    remote: -----> Compressing...        
    remote:        Done: 195.9M        
    remote: -----> Launching...        
    remote:        Released v5617       

    How did we come across this error?

    Every Wednesday we do an automatic bundle update to see what’s new and update all the gems (of course, only if the tests pass). This Wednesday the build failed because of the libv8 error and native compilation.

    It was just an unused dependency left in the Gemfile.

     
  • kmitov 7:37 am on February 19, 2021 Permalink
    Tags: javascript, typescript

    “In one paragraph or less” – why I chose JavaScript over TypeScript 

    My idea with this article is to try to summarize and share “in one paragraph or less” the main reason why, as a CTO, I chose one technology over another. We take everything into account – infrastructure, team, business requirements, and many other factors – which could be quite complex, but can we share the essence?

    In one paragraph or less

    “I chose vanilla JavaScript compiled with Google Closure Compiler and not TypeScript because I was building an extensible plugin based framework and I wanted to allow each and every plugin developer to be free to choose vanilla JavaScript or TypeScript for their plugins. I did not want to limit anyone’s future choices.”

    The pre-story (if you are interested)

    A few days ago I called an old friend. We’ve been acting as CTOs of two companies for a couple of years (I was more acting, he was more like the real deal). But we haven’t talked in a while. The conversation went pretty fast from “How are things at work and at home?” to “Why did you choose this technology over that technology?”

    He: – We made some interesting things on the technology front.

    Me: – Really? What?

    He: – We went for the React, TypeScript path and did…

    Me: – When I had to make this decision I stayed on the Rails, Stimulus with vanilla JS path.

    He: – You know, if it wasn’t for this and that, I would have done as you did. But you should definitely check GraphQL in more detail.

    Me: – Oh, I have and …

    This conversation went on for some time.

    I had a very similar conversation about a year ago with another CTO friend who said:

    I chose “technology X” instead of “technology Y” to keep some sanity in our team.

    So we make these decisions, but can we communicate them?

     
  • kmitov 6:39 pm on February 15, 2021 Permalink
    Tags: javascript

    “if statements” are a “code smell” 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    Today we made a code review on a feature. Generally the review process in our team is – someone drops a message in the Slack channel and says – “hey, I have these changes. Can you please take a look at them?”. We do not require a merge from an “authority” to get something to production. We also have one rule we are trying to follow:

    Your commit should make the product better than it previously was.

    https://www.axlessoft.com/careers/

    The commit below did not make the code better and this is the article about it. I am sure it would be useful for all of our team members and I hope it will be useful for the community as a whole.

    How “if statements” are a sign of “code smell”

    This is the commit. Do you see the problem with it?

       /**
        * @private
        * @param      {IS.StepsTree.LoadedEvent} event
        */
       onStepsTreeLoaded(event) {
    -    this._buildId = this.generateBuildId();
    +    if (!this._modeChangeOccured) {
    +      this._buildId = this.generateBuildId();
    +    } else {
    +      this._modeChangeOccured = false;
    +    }
    +
    

    The logic in the onStepsTreeLoaded method has significantly changed. It was a simple initialization of a private variable. Now it is an if statement with an else that sets the variable used in the if to a false value.

    Wow. This is even difficult to explain.

    Why was the change introduced and how did the “code smell” help us improve?

    The thing is that our colleague had to do this change to keep the behavior of the code based on a commit from 7 months ago. But now we saw this smelly code and we thought:

    Why is this even needed in the first place? Why do we call this onStepsTreeLoaded method and what is it doing for us?

    Turns out that we can just move the initialization from the onStepsTreeLoaded method to another method called at a different place, and we can delete onStepsTreeLoaded. We keep the same behavior. There will be no regressions. The framework has changed in the last few months to the point that there is now a better place for this initialization to happen.
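
    Here is a minimal sketch of the refactor – onBuildStarted is a hypothetical name for that better place, just for illustration:

    // Before: initialization in onStepsTreeLoaded, guarded by unrelated state.
    onStepsTreeLoaded(event) {
      if (!this._modeChangeOccured) {
        this._buildId = this.generateBuildId();
      } else {
        this._modeChangeOccured = false;
      }
    }

    // After: the initialization moves to the one method that actually
    // starts a new build, and onStepsTreeLoaded is deleted entirely.
    onBuildStarted(event) {
      this._buildId = this.generateBuildId();
    }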

    My point is: “if statement”==”code smell”

    Adding an if statement to working code is probably a code smell. Wrapping existing code in an if statement based on unrelated state, with an else that resets this same state, is probably the precise definition of what “code smell” is.

    Conclusion

    Revise your assumptions. Don’t add the if statements. Think again whether it is really needed, why it is needed and where its place is in the architecture of the platform.

    The concerned emotion

    Can’t think of a better way to show you the emotion of code smell than to show you this Gorilla.

    FabBRIX WWF, Gorilla in 3D building instructions
     
  • kmitov 10:25 am on January 22, 2021 Permalink
    Tags: Google Closure Compiler, javascript

    Quality in an event-driven plugin based browser framework. 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We’ve designed, developed from scratch and are running an event-driven plugin based browser framework that we call Instructions Steps (IS). It helps us visualize 3D models and building instructions on the BuildIn3D and FLLCasts platforms. Currently it consists of hundreds of extensions separated into about 50 repos with 587 releases. We’ve figured out a way to keep the quality of the whole framework at a really good level, with almost no bugs and errors.

    This article is about how we are doing it. The main purpose is to give an overview for newcomers to our team, but I hope the developer community could benefit from it as a whole.

    The IS architecture (for context)

    I will enter into the details about the architecture of IS in another article. For this article it is enough to say that IS consists of a really small core – 804 lines of code – and a lot of extensions.

    There are many extensions that extend the framework. Most of them are under 200 lines. The framework is highly decoupled and “everything is an extension”. Look at the 3D model and building instruction below. The “previous” button is an extension. The “next” button is an extension. You could have “parts list”, “bill of materials”, play animations, fit and rotate the camera. These are all extensions. I took this idea from the way we were building plugins for Eclipse (many years ago).

    Sphere from Geosmart GeoSphere, but this time in 3D

    What is the problem with quality and how do we keep delivering a quality product?

    Event-driven plugin based architectures have many advantages – like decoupling the plugins, which makes them more maintainable. They force you to have clear API boundaries, which also makes them more maintainable. But there are a lot of questions and drawbacks compared to a nice Monolith app. What should we test? Should we test a specific extension, the repo, or the extension as it works with all of its direct dependencies? What kind of specs should we develop? Should we have small unit specs that test the extension in isolation, or should we put all the hundreds of extensions together and test them as a whole? How do we make these decisions?

    Here are the few simple rules that we try to follow.

    Compilation and type checking with Google Closure Compiler in ADVANCED mode

    We use vanilla JavaScript. No TypeScript. There are reasons for this. Nothing against TypeScript actually. We use Google Closure Compiler to compile each and every extension.

    Here is an example of a declaration of an “interface method”

    /**
     * Loads the given url and returns a Promise that when resolved will provide the caller with a {@link IS.StepsTree.StepData}.
     *
     * @export
     * @param  {string|File} file - url to the file or a DOM File object to be loaded
     * @return {Promise} Promise that when resolved will provide a {@link IS.StepsTree.StepData} which is the root step
     */
    IS.StepsTree.IProvider.prototype.getStepsTree = function(file) {};

    Looking at the code, we have jsdoc annotations like “@param”, “@return” and “@export”. These are annotations that GCC understands and checks. It checks that the param is of the given type, that the returned value is of the given type, and that the classes that implement this interface actually implement it.

    Google Closure Compiler (GCC) will check if we are trying to access properties and methods that are not available.
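
    As an illustration, a hypothetical class implementing the interface above could be annotated like this (the class name and body are assumptions; the annotations are standard GCC jsdoc):

    /**
     * A provider that loads the steps tree from a file.
     *
     * @constructor
     * @implements {IS.StepsTree.IProvider}
     */
    IS.StepsTree.FileProvider = function() {};

    /** @override */
    IS.StepsTree.FileProvider.prototype.getStepsTree = function(file) {
      // GCC verifies that this method exists and matches the interface
      // signature – a missing or misspelled method fails the compilation.
      return Promise.resolve(new IS.StepsTree.StepData());
    };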

    As a general rule of thumb – compilers are strict. If they understand your code, and compile it, then your code fulfills a bare minimum of requirements.

    GCC has helped us a lot. It takes some time to get used to it, to learn all the annotations, how to use them and how to develop SDKs and libraries that are compiled, but it pays off. I’ve previously shared our experience with GCC. Here is one lecture that I gave a few times – https://github.com/thebravoman/google-closure-compiler-presentation/blob/main/gcc_presentation.md

    Each extension is tested in isolation only with its direct dependencies available

    The navigation extensions are located in the repo “is-navigation”. When we test the functionality of the “Next” button we don’t expect to also have the “Fullscreen” or the “Animations” extensions available.

    Each extension is tested automatically in isolation, because each extension should work on its own, given that it is the only extension that is installed (along with its direct dependencies, of course). Which makes sense. We are building a framework, a platform. When we have a framework, platform or even an OS, we should be able to install one extension, app or program and it should work on its own.

    For testing the extensions we use Jasmine and Teaspoon and I wrote an article about how and why we do it.

    All extensions are tested together in the ‘release_pack’

    What teams building platforms and frameworks quickly find out is that all the extensions and apps can work separately, but there are a lot of cases where, if you put them all together and install them, things start to get more difficult. An example is all the different problems different OSes have – one program affecting another program in an unpredictable way.

    So we’ve built the is-release_pack. What it does is put all the extensions together and run a few basic tests on all of them.

    It contains 1-2 specs that check that each extension is working in the general case and probably one or two very specific cases. All the other specific cases are tested in the extensions, not in the “release_pack”. We push everything we can to the specs of the specific extension, but we have a few “integration” specs that are in the is-release_pack. And it is beautiful.
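
    A hypothetical release_pack spec, in the style of the Jasmine specs we run with Teaspoon (the listener setup follows the navigation example further down this page; the exact wiring is assumed):

    // One basic scenario that exercises several extensions at once:
    // the navigation extension handles the key press and the steps tree
    // extension reports the step change.
    describe("release_pack basic navigation", function() {
      it("moves to the next step on ArrowRight", function(done) {
        const eventRight = new KeyboardEvent("keydown", { key: "ArrowRight" });
        document.onkeydown(eventRight);

        IS.EventsUtil.WaitFor(() => this.iteratorListener.event).then(() => {
          expect(this.iteratorListener.event.getSteps()).toEqual(1);
          done();
        });
      });
    });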

    The downside of integration specs

    There is one major downside with integration specs:

    All of us, developers, are lazy when it comes to really building it right. Once we build the feature and see that it is working after a day of work, there is little motivation in us to spend the next 3 days on building it right. It just feels so good to have it working that you commit and move on to the next thing.

    When there is a problem and an integration spec is failing, most of the time it is easier to go and “ease” the integration spec. Change the expects a bit. Modify them. Even remove them.

    Other times when we have to develop a specific spec for the specific extension it feels easier to develop an integration spec instead of 20 specific specs in the repo for the extension.

    Sooner or later you end up in one of these situations:

    1. There are no integration specs or they contain expects and assertions that can not validate that your product is working correctly.
    2. There are a ton of integration specs that are constantly and randomly failing from time to time. The “integration” specs suite also takes forever to pass as there are now so many “integration specs”.

    Both of these situations are highly undesirable.

    Resolving the downside of integration specs

    One thing I learned writing business plans when applying for different VC funding is RACI.

    There are people Responsible for the job, people Accountable for the job, people that could be Consulted and people that should be kept Informed.

    So who is Accountable for the delivery of the IS framework and for the framework working correctly with all extensions in the user browser?

    Ideally it should be one person. In our case – it is Me.

    We are all responsible for the implementation. But in a team, one person should be held Accountable if something is not working and not right. One is Accountable for not checking. This person could change of course, but at any given moment there is someone “starting the engine of the car” as it exits the factory. You should start the engine and make sure the car works. You are accountable for checking it. You might not be responsible if it does not start, but you are accountable for checking.

    With the is-release_pack we resolved this for us.

    Only the Accountable (Me, in our case) has access to the is-release_pack and its specs. Nobody else. You can not add integration specs, you can not remove them, you can not even change them on your own. The person that is accountable should do it. We keep the number of specs to a minimum – one basic scenario for each extension and, when appropriate, 1-2 (but no more) very specific scenarios per extension. In the release_pack we prefer to have scenarios that involve more than one extension. In fact, if there is a scenario that involves all the extensions, we would probably use it.

    Integration specs are coupling the extensions?

    Yes. They are. When one extension fails, the integration spec for all the extensions fails. That is true. With hundreds of extensions, if every day a different extension “fails”, then you will not have a successful run of the integration suite in years.

    But the customer “does not care”. The integration spec is the closest spec to the customer experience. The users never interact with a single extension. They interact with all extensions.

    At the same time, if an extension has reached the release_pack and is failing it, we go and add a new spec – but not in the release_pack. We add it to the repo for the specific extension. This protects us from regressions.

    Conclusion

    By having a small subset of integration specs in a project to which only I have access and that is the final step in the release pipeline we’ve managed to stop hundreds of releases that would break existing clients, would lose a feature or introduce a bug.

    587 official releases already and it takes 5 to 10 minutes to release the whole framework. Integration specs are present in the release_pack, but we keep them to a minimum, each testing many extensions at once and making sure that a real life client scenario is working.

     
  • kmitov 7:54 am on January 21, 2021 Permalink
    Tags: javascript

    How and why we test JavaScript – Jasmine and Teaspoon 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We are serious about specs. A good suite of specs has helped us many times, especially when it comes to regressions. For the JavaScript part of our stack many of our specs are using Jasmine and are run with Teaspoon in the browser environment. This article is about how we use Teaspoon and Jasmine to run our specs for pure JavaScript frameworks like Instructions Steps and BABYLON.js. Both BuildIn3D and FLLCasts are covered by these specs, but we use them mostly for the Instructions Steps framework.

    Why JavaScript specs?

    In the Rails community specs are important. In the JavaScript community – well, they are not that well respected. We can debate the reasons, but this is our observation.

    When we implement specs for the platforms we like to use Rails system specs. They give us exactly what we need. Let’s take a real life example.

    Filtering by category

    We visit the page at BuildIn3D for all the 3D models & building instructions. We filter them by brand. We click on a brand and see only the 3D models & instructions for this specific brand.

    Here is the spec. Short and sweet.

    scenario "/instructions shows the instructions and can be filted by category", js: true do
      # go to the third page as the materials are surely not there
      visit "/instructions?page=3" 
      expect(page).not_to have_link material1.title
    
      click_category category1
    
      # We make sure the url is correct
      expect(page).to have_current_path(/\/instructions\?in_categories%5B%5D=#{category1.id}$/, url: true)
    
      # We make sure that only material1 is shown and material2 is not shown
      # The materials are filtered by category
      expect(page).to have_link material1.title
      expect(page).not_to have_link material2.title
    end

    The spec is a Rails system spec. Other articles enter into more details about them. The point is:

    With a Rails system spec we don’t concern ourselves with JavaScript.

    We visit the page, we click, we see that something has changed on the page – like new materials were shown and others were hidden.

    What’s the use case for JavaScript specs then?

    Take a look at the following embedded BuildIn3D instruction. It is loaded live and it has a next button. Click on the next button.

    TRS the Turning Radio Satellite construction from GeoSmart and in 3D

    Here is the actual spec in JavaScript with Jasmine, run every day with Teaspoon.

    it("right arrow button triggers TriedMoveIterator with 1", function(done) {
        // This is a JavaScript spec that is inside the browser. It has access to all the APIs 
        // of the browser. 
        const eventRight = new KeyboardEvent("keydown", { key: "ArrowRight" });
        document.onkeydown(eventRight);
        
        // Wait for a specific event to occur in the Instructions Steps (IS) framework
        // We are inside the browser here. There is no communication with the server
    
        IS.EventsUtil.WaitFor(() => this.iteratorListener.event).then(() => {
          expect(this.iteratorListener.event.getSteps()).toEqual(1);
          done();
        });
      });
    

    This is how we use JavaScript specs and this is why we need them:

    We need JavaScript specs that are run inside the browser to communicate with other JavaScript objects and the browser APIs for JavaScript apps.

    In this specific case there is this “iteratorListener” that monitors how the user follows the instructions. We need access to it. Otherwise it gets quite difficult to test. We can not write a spec to test which pixels are drawn on the screen after clicking next. That would be … difficult, to put it mildly. We don’t have to do it. We need to know that clicking the next button has triggered the proper action, which will then draw the actual geometry and colors on the screen.

    How we use Jasmine and Teaspoon to run the JavaScript specs

    Years ago I found a tool called Teaspoon. Looking back this has been one of the most useful tools we’ve used in our stack. It allows us to have a Rails project (and we have a lot of those) and to run JavaScript specs in this Rails project. The best thing – it just works (the Rails way).

    # add teaspoon dependency
    $ cd test/dummy/
    $ rails generate teaspoon:install
    # write a few specs
    $ rails s -p 8889
    

    You start a server, visit the URL in the browser and the specs are executed. (You could also run them headless with “bundle exec rake teaspoon”, which is what our Jenkins build does.)

    Tests are pure JavaScript and we use Jasmine.

    That’s it. Not much to add. It’s simple. The specs are simple JS files located in the “spec/javascripts/” folder, and here is an example of one of them:

    describe("IS.EventDef", function() {
      describe("constructing throws error if", function() {
        it("event name is null", function() {
          expect(() => new IS.EventDef(null, {})).toThrow(new Error("Null argument passed."));
        });
    
        it("event conf is null", function() {
          expect(() => new IS.EventDef("smoe", null)).toThrow(new Error("Null argument passed."));
        });
    
        it("declaring an event, but the class attribute value is null", function() {
          expect(() => {
            const ext = IS.ExtensionDef.CreateByExtensionConf({
              extension: new IS.Extension(),
              events: {
                declaredEvent: {
                  class: null
                }
              }
            });
          }).toThrow("Declared event 'declaredEvent' class attribute is null!");
        });
      });
    });
    

     