Tagged: specs

  • kmitov 7:13 am on November 10, 2021 Permalink |
    Tags: specs

    Migrating to jasmine 2.9.1 from 2.3.4 for teaspoon 

    We finally decided it is probably time to try to migrate to jasmine 2.9.1 from 2.3.4.

    An error started occurring randomly, and before digging down, investigating it, and at the end finding out that it was probably the result of a wrong version, we decided to try to get up to date with jasmine.

    The latest jasmine version is 3.X, but 2.9.1 is already a huge step from 2.3.4.

    We decided to migrate to 2.9.1 first. The issue is that the moment we migrated, this error appeared:

    'beforeEach' should only be used in 'describe' function

    It took a couple of minutes, but what we found out is that fixtures are now used in a different way.

    Here is the difference and what should be done.

    jasmine 2.3.4

    fixture.set could be both in the beforeEach and in the describe

    // This works
    // fixture.set is in the describe
    describe("feature 1", function() {
      fixture.set(`<div id="the-div"></div>`);
      beforeEach(function() {
      })
    })
    // This works
    // fixture.set is in the beforeEach
    describe("feature 1", function() {
      beforeEach(function() {
        fixture.set(`<div id="the-div"></div>`);
      })
    })

    jasmine 2.9.1

    fixture.set could be only in the describe and not in the beforeEach

    // This does not work as the fixture is in the beforeEach
    describe("feature 1", function() {
      beforeEach(function() {
        fixture.set(`<div id="the-div"></div>`);
      })
    })
    // This does work
    // fixture.set could be only in the describe
    describe("feature 1", function() {
      fixture.set(`<div id="the-div"></div>`);  
      beforeEach(function() {
        
      })
    })
     
  • kmitov 8:57 am on October 8, 2021 Permalink |
    Tags: amazon-s3, specs

    Sometimes you need automated tests on production 

    In this article I am making the case that sometimes you just need to run automated tests against the real production and the real systems with real data for real users.

    The case

    We have a feature on one of our platforms:

    1. User clicks on “Export” for a “record”
    2. A job is scheduled. It generates a CSV file with information about the record and uploads it on S3. Then a presigned_url for 72 hours is generated and an email is sent to the user with a link to download the file (roughly as sketched below).
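
    Roughly, the end of that job looks like this. This is a minimal sketch with the aws-sdk-s3 gem; the bucket, key and file names are made up:

    require "aws-sdk-s3"

    # Upload the generated zip and create a 72-hour presigned link
    # that is then emailed to the user. All names here are hypothetical.
    s3 = Aws::S3::Resource.new
    object = s3.bucket("the-bucket").object("exports/record-123.zip")
    object.upload_file("/tmp/record-123.zip")
    presigned_url = object.presigned_url(:get, expires_in: 72 * 3600)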

    The question is how do you test this?

    Confidence

    When it comes to specs I like to develop automated specs that give me the confidence that I deliver quality software. I am not particularly religious about what the spec is, as long as it gives me confidence and does not stand in my way by being too fragile.

    Sometimes these specs are model/unit specs, many times they are system/feature/integration specs, but there are cases where you just need to run a test on production against the production db, production S3, production env, production user, production everything.

    Going with a System/Integration spec

    A spec that would give me confidence here is one that simulates the user behavior with Rails system specs.
    The user goes and clicks on Export, and I check that we've received an email and that this email contains the link:

      scenario "create an export, uploads it on s3 and send an email" do
        # Set up the record
        user = FactoryBot.create(:user)
        record = FactoryBot.create(:record)
        ... 
    
        # Start the spec
        login_as user
        visit "/records"
        click_on "Export"
        expect(page).to have_text "Export successfully scheduled. You will receive an email with a link soon."
    
        mail_html_content = ActionMailer::Base.deliveries.select{|email| email.subject == "Successful export"}.last.html_part.to_s
        expect(mail_html_content).to have_xpath "//a[text()='#{export_name}']"
        link_to_exported_zip = Nokogiri::HTML(mail_html_content).xpath("//a[text()='#{export_name}']").attribute("href").value
    
        csv_content = read_csv_in_zip_given_my_link link_to_exported_zip 
        expect(csv_content).not_to be_nil
        expect(csv_content).to include user.username
      end
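
    The helper read_csv_in_zip_given_my_link above is our own and its code is not shown. A minimal sketch of what it could look like, assuming the rubyzip gem and a single CSV file inside the archive:

    require "open-uri"
    require "zip"

    # Hypothetical helper: downloads the zip from the link and returns
    # the content of the first CSV file inside it, or nil if there is none.
    def read_csv_in_zip_given_my_link(link)
      zip_data = URI.open(link).read
      Zip::File.open_buffer(zip_data) do |zip|
        entry = zip.glob("*.csv").first
        return entry&.get_input_stream&.read
      end
    end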

    This spec does not work!

    First problem – AWS was stubbed

    We have a lot of other specs that are using the S3 API. It is a good practice, as you don't want all your specs to touch S3 for real. It is slow and it is too coupled. But for this spec there was a problem. There was a file uploaded on S3, but the file was empty. The reason was that one of the machines running the specs had no 'zip' command. It was not installed, and we use 'zip' to create a zip of the csv files.

    Because of this I wanted to upload an actual file somehow and actually check what is in the file.

    I created a spec filter that would start a specific spec with real S3.

    # spec/rails_helper.rb
    RSpec.configure do |config|
      config.before(:each) do
        # Stub S3 for all specs
        Aws.config[:s3] = {
          stub_responses: true
        }
      end
    
      config.before(:each, s3_stub_responses: false) do
        # but for some specs, those that have "s3_stub_responses: false" tag do not stub s3 and call the real s3.
        Aws.config[:s3] = {
          stub_responses: false
        }
      end
    end

    This allows us to start the spec:

      scenario "create an export, uploads it on s3 and send an email", s3_stub_responses: false do
        # Now in this spec S3 is not stubbed and we upload the real file
      end

    Yes, we could create a local S3 server, but then comes the second problem.

    Mailer was adding invalid params

    In the email we are sending a presigned_url to the S3 file, as the file is not public.
    But the mailer that we were using was adding "utm_campaign=…" to the url params.
    This means that the S3 presigned url was no longer valid. Checking that there is a url in the email was simply not enough. We had to actually download the file from S3 to make sure the link is correct.

    This was still not enough.

    It is still not working on production

    All the tests were passing with real S3 and a real mailer in the test and development envs, but when I went to production the feature was not working.

    The problem was with the configuration. In order to upload to S3 we should know the bucket. The bucket was configured for Test and Development but was missing for Production:

    config/environments/development.rb:  config.aws_bucket = 'the-bucket'
    config/environments/test.rb:  config.aws_bucket = 'the-bucket'
    config/environments/production.rb: # there was no config.aws_bucket
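
    The fix was a single line (the bucket name here is a placeholder):

    # config/environments/production.rb
    config.aws_bucket = 'the-production-bucket'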

    The only way I could make sure that the configuration in production is correct and that the bucket is set up correctly is to run the spec on a real production.

    Should we run all specs on a real production?

    Of course not. But there should be a few specs for a few features that test that the buckets have the right permissions, that they are accessible, and that the configuration in production is right. This is what I've added. Once a day a spec runs against production and tests that everything works with the real S3, real db, real env and configuration, the same way users will use the feature.
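
    Such a spec is tagged so it can be started on its own, separate from the regular suite. A sketch of the convention; the tag name is hypothetical:

    # Excluded from the regular suite and run by a scheduled job
    # once a day against the production environment.
    scenario "create an export, uploads it on s3 and send an email", real_production: true do
      # ...
    end

    # started with: rake spec SPEC_OPTS="--tag real_production"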

    How is this part of the CI/CD?

    It is not. We do not run this spec before deploy. We run all the other specs before deploy, and they give us 99% confidence that everything works. But for the one percent we run a spec once every day (or after deploy) just to check a real, complex scenario involving the communication between different systems.

    It pays off.

     
  • kmitov 6:41 am on June 5, 2021 Permalink |
    Tags: specs

    Yet another random failing spec 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    This article is about a random failing spec. I spent more than 5 hours on this trying to track it down so I decided to share with our team what has happened and what the stupid mistake was.

    Random failing

    Random failing specs pass most of the time and fail sometimes. The context of their failure seems to be random.

    Context

    At FLLCasts.com we have categories. There was an error when people were visiting the categories. We receive each and every error on email, and some of the categories had stopped working because of a wrong SQL query. After the migration from Rails 6.0 to Rails 6.1 some of the queries started working differently, mostly because of eager loading, and we had to change them.

    The spec

    This is the code of the spec:

     scenario "show category content" do
        category = FactoryBot.create(:category, slug: SecureRandom.hex(16))
        episode = FactoryBot.create(:episode, :published_with_thumbnail, title: SecureRandom.hex(16))
        material = FactoryBot.create(:material, :published_with_thumbnail, title: SecureRandom.hex(16))
        program = FactoryBot.create(:program, :published_with_thumbnail, title: SecureRandom.hex(16))
        course = FactoryBot.create(:course, :published_with_thumbnail, title: SecureRandom.hex(16))
    
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: episode, category: category)
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: material, category: category)
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: program, category: category)
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: course, category: category)
    
        expect(category.category_content_refs.count).to eq 4
        visit "/categories/#{category.to_param}"
    
        find_by_xpath_with_page_dump "//a[@href='/tutorials/#{episode.to_param}']"
        find_by_xpath_with_page_dump "//a[@href='/materials/#{material.to_param}']"
        find_by_xpath_with_page_dump "//a[@href='/programs/#{program.to_param}']"
        find_by_xpath_with_page_dump "//a[@href='/courses/#{course.to_param}']"
    
      end

    We add a few objects to the category and then we check that we see them when we visit the category.

    The problem

    Sometimes when running the spec only one of the objects in the category is shown. Sometimes none; most of the time all of them are shown.

    The debug process

    The controller

    def show
      @category_content_refs ||= @category.category_content_refs.published
    end

    In the show action we just call published to get all the published content that is in this category. There are other things in the show action, but they are not relevant. We were using apply_scopes, we were using other concerns.

    The model

      scope :published, lambda {
        include_contents.where(PUBLISHED_OR_COMING_WHERE_SQL)
      }

    The scope in the model makes a query for published or coming.

    And the query, I kid you not, that was committed in 2018 (we've had this query for that long) was:

    class CategoryContentRef < ApplicationRecord
       
        PUBLISHED_OR_COMING_WHERE_SQL = [' (category_content_refs.content_type = \'Episode\' AND (episodes.published_at <= ? OR episodes.is_visible = true) ) OR
         (category_content_refs.content_type = \'Course\' AND courses.published_at <= ?) OR
         (category_content_refs.content_type = \'Material\' AND (materials.published_at <= ? OR materials.is_visible = true) ) OR
         category_content_refs.content_type=\'Playlist\'', *[Time.now.utc.strftime("%Y-%m-%d %H:%M:%S")]*4].freeze
    
    end
    

    I will give you a hint: the problem is with this query.

    You can take a moment and try to see where the problem is.

    The query problem

    The problem is with the .freeze and the constant in the class. The query is initialized when the class is loaded. Because of this it captures the time at the moment of loading the class, not the time of the query.

    Because the specs are fast, sometimes the class is loaded right before the spec and sometimes there are other specs executed in between, so the captured time is stale.
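
    A minimal sketch of a fix, assuming the scope lives on CategoryContentRef: keep only the SQL template as a constant and resolve the time inside the scope, at query time.

    class CategoryContentRef < ApplicationRecord
      # Only the SQL template is a constant - it contains no time value.
      PUBLISHED_OR_COMING_WHERE_SQL = '...the same SQL with ? placeholders...'.freeze

      # The time is computed when the query runs, not when the class loads.
      scope :published, lambda {
        include_contents.where(PUBLISHED_OR_COMING_WHERE_SQL,
                               *[Time.now.utc.strftime("%Y-%m-%d %H:%M:%S")] * 4)
      }
    end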

    It seems simple once you see it, but these are the kind of things that you keep missing while debugging. They are right in front of your eyes, and yet sometimes you just can't see them, until you finally see them and then you can not unsee them.

     
  • kmitov 6:44 am on March 30, 2021 Permalink |
    Tags: specs

    Dead code and one more way it could hit you back – by being loaded! 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We can all agree that dead code should be removed. What we sometimes fail to see is how much it could cost to leave dead code in our product. Today's article is about an example of "dead code" and how it hit us back 1.5 years after we stopped using it. It took us 3 hours to debug and find the root cause. The conclusion is that we should have done 'git rm' a year ago. We did not, and the cost was 3 hours.

    This is a real life example article from our code base and it is designed to share experience with our team, but I hope the whole community could benefit.

    What happened

    1. Jenkins build failed

    The first time we saw it we decided it was not something serious, and it kept failing for a few days until we had time to address it in the next Sprint.

    2. Teaspoon, Jasmine, Bundler, Rails Engines

    In the project there was the production code in Rails.root, along with dummy test code for starting JavaScript specs in Rails.root/test/dummy/. We are using Teaspoon with Jasmine in Rails engines. Rails engines tend to create a test/dummy folder in which you could put the code for a test app in which you want your JavaScript specs to be executed. The production code is loaded in the test app and the specs are executed.

    About 1.5 years ago I did some experiments with this test app and made a few configurations. The goal was to have the production code along with some test app code both loaded in the test environment. In this way Teaspoon and Jasmine would see both the production and the test app code as coming from “production”. I literally remembered what my idea was and what I was trying to achieve. At the end I decided not to use the test app code, but to have only the production code located in Rails.root be loaded in the test environment.

    But I forgot to remove the test app code from test/dummy/app/assets/javascripts/

    3. Teaspoon Error

    The following error occurred. Here "IS" is a JavaScript namespace that is part of the production code.

    jenkins@vpszap6s:~/jobs/is-core Build and Release/workspace/test/dummy$ xvfb-run -a bundle exec rake teaspoon --verbose
    Starting the Teaspoon server...
    Teaspoon running default suite at http://127.0.0.1:36051/teaspoon/default
    ReferenceError: IS is not defined
    
    

    4. The error and debugging process

    Not sure what the error was exactly, I tried to identify the difference between my machine and the server machine where the tests were failing. This server machine is called "elvis".

    1. Bundler version – my version was 2.2.11, the elvis version was 2.2.14. Tried an upgrade, but it did not resolve the issue.
    2. Chrome driver – since the tests were executed on Chrome I saw that the chromedriver versions were different. Mine was 86, elvis was on 89. Synced them; the error still occurred.
    3. Rake version – rake versions were the same
    4. Ruby version – the same
    5. Teaspoon and Jasmine versions – the same
    6. The OS is the same
    7. I could not find any difference between my env and the elvis env.

    Turns out the problem was in the loading order. elvis was loading test app code and then production code. My machine was loading production code and then test app code. I could not figure out why. The whole test app code was:

    // test/dummy/app/assets/javascripts/ext/dummy/dummy.js 
    IS.Dummy = {};

    The error that was occurring was:

    ReferenceError: IS is not defined

    5. Solution

    I was curious to continue debugging and see why the two machines were loading the code in a different order, but in the end decided against it. I just removed the dead code in test/dummy/app/assets/javascripts/ext/dummy/dummy.js, which I had only thought of as dead; it turned out it was affecting our project.

    6. Builds passing

    Finally the builds were passing.

    Conclusion

    Dead code may not be that dead after all. It might be loaded even if not used, and the loading order could differ between machines. Better be safe and remove dead code altogether. If we need it that much, we have it in the Git repo history.

     
  • kmitov 7:48 am on February 3, 2021 Permalink |
    Tags: python, specs

    Where is the redundancy? 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    I think calling a method three times is redundant. But then again, you have to balance. Today's article is about a code review that in the end took a few hours of different discussions in total, and I believe it is important. These kinds of things take time. Failed builds, difficult specs.

    The IRL example – Where is the redundancy? Is this DRY?

     @dataclass(frozen=True)
     class LdrawLine(abc.ABC):
    +    default_x: ClassVar[float] = 23
    +    default_y: ClassVar[float] = 45
    +    default_z: ClassVar[float] = 0
         """
             This is the abstract class which every LdrawLine should implement.
         """
    @@ -85,11 +88,19 @@ class LdrawLine(abc.ABC):
             elif line_args[1] == "STEP":
                 line_param = _Step()
             elif line_args[1] == "ROTSTEP":
    -            rostep_params = {
    -                "rot_x": float(line_args[2]),
    -                "rot_y": float(line_args[3]),
    -                "rot_z": float(line_args[4])
    -            }
    +            if line_args[2] == "END":
    +                rostep_params = {
    +                    "rot_x": LdrawLine.default_x,
    +                    "rot_y": LdrawLine.default_y,
    +                    "rot_z": LdrawLine.default_z,
    +                    "type": "ABS"
    +                }
    +            else:
    +                rostep_params = {
    +                    "rot_x": float(line_args[2]),
    +                    "rot_y": float(line_args[3]),
    +                    "rot_z": float(line_args[4])
    +                }
     
    

    This piece of code (along with a few other changes in the commit) was the root of a 2-hour discussion in the team. A spec failed because some things were int while we were expecting them to be float.

    Calling ‘float’ three times like this is redundant

    The reason I think so is that if you later decide you would like to have a double value or an int value instead, you would have to change the code in three places.

    Probably a better solution would be:

    +            if line_args[2] == "END":
    +                rostep_params = {
    +                    "rot_x": LdrawLine.default_x,
    +                    "rot_y": LdrawLine.default_y,
    +                    "rot_z": LdrawLine.default_z,
    +                    "type": "ABS"
    +                }
    +            else:
    +                rostep_params = {
    +                    "rot_x": line_args[2],
    +                    "rot_y": line_args[3],
    +                    "rot_z": line_args[4]
    +                }
                      # We are adding a loop to call the float
    +                for key in ["rot_x", "rot_y", "rot_z"]:
    +                     rostep_params[key] = float(rostep_params[key])
     
    

    We call float only once, at a single place. Now we have to deal with the fact that we have "rot_x" as a key in two places; yes, that is true, but this could easily be extracted and we can iterate over the rostep_params values. But now we have consistency.

    Is it harder to read? Probably a little. Instead of simple statements you now have a loop. So you lose something, but you gain something: a float function that is called at a single place.

    What are we doing with these rotsteps?

    ROTSTEP is a command in the LDR format. We support LDR for 3D building instructions. Here is one example, with a FabBrix Monster, that uses the LDR ROTSTEP command:

    FabBRIX Monsters, Cthulhu in 3D building instructions
     
  • kmitov 10:25 am on January 22, 2021 Permalink |
    Tags: Google Closure Compiler, specs

    Quality in an event-driven plugin based browser framework. 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We’ve designed, developed from scratch and are running an event-driven plugin based browser framework that we call Instructions Steps (IS). It helps us visualize 3D models and building instructions on the BuildIn3D and the FLLCasts platforms. Currently it consists of hundreds of extensions separated in about ~50 repos with 587 releases. We’ve figured out a way to keep the quality of the whole framework at a really good level with almost no bugs and errors.

    This article is about how we are doing it. The main purpose is to give an overview for newcomers to our team, but I hope the developers community could benefit from it as a whole.

    The IS architecture (for context)

    I will go into the details of the architecture of IS in another article. For this article it is enough to say that IS consists of a really small core – 804 lines of code – and a lot of extensions.

    There are many extensions that extend the framework. Most of them are under 200 lines. The framework is highly decoupled and "everything is an extension". Look at the 3D model and building instruction below. The "previous" button is an extension. The "next" button is an extension. You could have "parts list", "bill of materials", play animations, fit and rotate the camera. These are all extensions. I took this idea from the way we were building plugins for Eclipse (many years ago).

    Sphere from Geosmart GeoSphere, but this time in 3D

    What is the problem with quality and how do we keep delivering a quality product?

    Event-driven plugin based architectures have many advantages – like decoupling the plugins, which makes them more maintainable. It forces you to have clear API boundaries, which also makes them more maintainable. But there are a lot of questions and drawbacks compared to a nice Monolith app. What should we test? Should we test a specific extension, the repo, or the extension as it works with all of its direct dependencies? What kind of specs should we develop? Should we have small unit specs that test the extension in isolation, or should we put all the hundreds of extensions together and test them all at once? How do we make these decisions?

    Here are the few simple rules that we try to follow.

    Compilation and type checking with Google Closure Compiler in ADVANCED mode

    We use vanilla JavaScript. No TypeScript. There are reasons for this. Nothing against TypeScript actually. We use Google Closure Compiler to compile each and every extension.

    Here is an example of a declaration of an “interface method”

    /**
     * Loads the given url and returns a Promise that when resolved will provide the caller with a {@link IS.StepsTree.StepData}.
     *
     * @export
     * @param  {string|File} file - url to the file or a DOM File object to be loaded
     * @return {Promise} Promise that when resolved will provide a {@link IS.StepsTree.StepData} which is the root step
     */
    IS.StepsTree.IProvider.prototype.getStepsTree = function(file) {};

    Looking at the code, we have jsdoc annotations like "@param", "@return", "@export". These are annotations that GCC understands and will check. It will check that the param is of the given type and that the returned value is of the given type. It will check that the classes that implement this interface actually implement it.

    Google Closure Compiler (GCC) will check if we are trying to access properties and methods that are not available.

    As a general rule of thumb – compilers are strict. If they understand your code, and compile it, then your code fulfills a bare minimum of requirements.

    GCC has helped us a lot. It takes some time to get used to it and to learn all the annotations and how to use them and how to develop SDK and libraries that are compiled, but it pays off. I’ve previously shared about our experience with GCC. Here is one lecture that I gave a few times – https://github.com/thebravoman/google-closure-compiler-presentation/blob/main/gcc_presentation.md

    Each extension is tested in isolation only with its direct dependencies available

    The navigation extensions are located in the repo “is-navigation”. When we test the functionality of the “Next” button we don’t expect to also have the “Fullscreen” or the “Animations” extensions available.

    Each extension is tested automatically in isolation, because each extension should work on its own given that it is the only extension that is installed (plus its direct dependencies, of course). Which makes sense: we are building a framework, a platform. With a framework, platform or even an OS we should be able to install one extension, app or program, and it should work on its own.

    For testing the extensions we use Jasmine and Teaspoon and I wrote an article about how and why we do it.

    All extensions are tested together in the ‘release_pack’

    What teams building platforms and frameworks quickly find out is that all the extensions and apps can work separately, but there are a lot of cases where, if you put them all together and install them, things start to get more difficult. An example is all the different problems different OSes have: one program affecting another program in an unpredictable way.

    So we’ve build the is-release_pack. What it does is to put all the extensions together and to run a few basic tests on all of them.

    It contains 1-2 specs that check that each extension is working in the general case, and probably one or two very specific cases. All the other specific cases are tested in the extensions, not in the "release_pack". We push everything we can to the specs of the specific extension, but we keep a few "integration" specs in the is-release_pack. And it is beautiful.

    The downside of integration specs

    There is one major downside with integration specs:

    All of us, developers, are lazy when it comes to really building it right. Once we build the feature and see that it is working after a day of work, there is little motivation in us to spend the next 3 days on building it right. It just feels so good to have it working that you commit and move on to the next thing.

    When there is a problem and an integration spec is failing, most of the time it is easier to go and "ease" the integration spec. Change the expects a bit. Modify them. Even remove them.

    Other times, when we have to develop a specific spec for a specific extension, it feels easier to develop one integration spec instead of 20 specific specs in the repo for the extension.

    Sooner or later you end up in one of these situations:

    1. There are no integration specs, or they contain expects and assertions that can not validate that your product is working correctly.
    2. There are a ton of integration specs that are constantly and randomly failing from time to time. The "integration" specs suite also takes forever to pass, as there are now so many "integration specs".

    Both of these situations are highly undesirable.

    Resolving the downside of integration specs

    One thing I learned writing business plans when applying for different VC funding is RACI.

    There are people Responsible for the job, people Accountable for the job, people that could be Consulted and people that should be kept Informed.

    So who is Accountable for the delivery of the IS framework and for the framework working correctly with all extensions in the user's browser?

    Ideally it should be one person. In our case – it is Me.

    We are all responsible for the implementation. But in a team one should be held Accountable if something is not working and not right. One is Accountable for not checking. This person could change of course, but at any given moment there is someone "starting the engine of the car" as it exits the factory. You should start the engine and make sure the car works. You are accountable for checking it. You might not be responsible if it does not start, but you are accountable for checking.

    With the is-release_pack we resolved this for us.

    Only the Accountable (Me in our case) has access to the is-release_pack and its specs. Nobody else. You can not add integration specs, you can not remove them, you can not even change them on your own. The person that is accountable should do it. We keep the number of specs to a minimum – one basic scenario for each extension and, when appropriate, 1-2 (but no more) very specific scenarios per extension. In the release_pack we prefer to have scenarios that involve more than one extension. In fact, if there is a scenario that involves all the extensions, we would probably use it.

    Integration specs are coupling the extensions?

    Yes. They are. When one extension fails, the integration spec for all the extensions fails. That is true. With hundreds of extensions, if a different extension "fails" every day, then you will not have a successful run of the integration suite in years.

    But the customer "does not care". The integration spec is the closest spec to the customer experience. The user never interacts with a single extension. They interact with all extensions.

    At the same time, if an extension has reached the release_pack and is failing the release pack, we will go and add a new spec, but not in the release_pack. We add it in the repo for the specific extension. This protects us from regressions.

    Conclusion

    By having a small subset of integration specs in a project to which only I have access, and which is the final step in the release pipeline, we've managed to stop hundreds of releases that would break existing clients, lose a feature or introduce a bug.

    587 official releases already, and it takes 5 to 10 minutes to release the whole framework. Integration specs are present in the release_pack, but we keep them to a minimum, each testing many extensions at once and making sure that a real life client scenario is working.

     
  • kmitov 7:54 am on January 21, 2021 Permalink |
    Tags: specs

    How and why we test JavaScript – Jasmine and Teaspoon 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We are serious about specs. A good suite of specs has helped us many times, especially when it comes to regressions. For the JavaScript part of our stack many of our specs use Jasmine and are run with Teaspoon in the browser environment. This article is about how we use Teaspoon and Jasmine to run our specs for pure JavaScript frameworks like Instructions Steps and BABYLON.js. Both BuildIn3D and FLLCasts are powered by these specs, but we use them mostly for the Instructions Steps Framework.

    Why JavaScript specs?

    In the Rails community specs are important. In the JavaScript community – well, they are not that well respected. We can debate the reasons, but this is our observation.

    When we implement specs for the platforms we like to use Rails system specs. They give us exactly what we need. Let’s take a real life example.

    Filtering by category

    We visit the page at BuildIn3D for all the 3D models & building instructions. We filter them by brand. We click on a brand and see only the 3D models & instructions for this specific brand.

    Here is the spec. Short and sweet.

    scenario "/instructions shows the instructions and can be filted by category", js: true do
      # go to the third page as the materials are surely not there
      visit "/instructions?page=3" 
      expect(page).not_to have_link material1.title
    
      click_category category1
    
      # We make sure the url is correct
      expect(page).to have_current_path(/\/instructions\?in_categories%5B%5D=#{category1.id}$/, url: true)
    
      # We make sure that only material1 is shown and material2 is not shown
      # The materials are filtered by category
      expect(page).to have_link material1.title
      expect(page).not_to have_link material2.title
    end

    The spec is a Rails system spec. Other articles enter into more details about them. The point is:

    With a Rails system spec we don’t concern ourselves with JavaScript.

    We visit the page, we click, we see that something has changed on the page – like new materials were shown and others were hidden.

    What’s the use case for JavaScript specs then?

    Take a look at the following embedded BuildIn3D instruction. It is live and it has a next button. Click on the next button.

    TRS the Turning Radio Satellite construction from GeoSmart and in 3D

    Here is the actual spec, in JavaScript with Jasmine, run every day with Teaspoon.

    it("right arrow button triggers TriedMoveIterator with 1", function(done) {
        // This is a JavaScript spec that is inside the browser. It has access to all the APIs 
        // of the browser. 
        const eventRight = new KeyboardEvent("keydown", { key: "ArrowRight" });
        document.onkeydown(eventRight);
        
        // Wait for a specific event to occur in the Instructions Steps (IS) framework
        // We are inside the browser here. There is no communication with the server
    
        IS.EventsUtil.WaitFor(() => this.iteratorListener.event).then(() => {
          expect(this.iteratorListener.event.getSteps()).toEqual(1);
          done();
        });
      });
    

    This is how we use JavaScript specs, and this is why we need them:

    We need JavaScript specs that are run inside the browser to communicate with other JavaScript objects and the browser APIs for JavaScript apps.

    In this specific case there is this "iteratorListener" that monitors how the user follows the instructions. We need access to it. Otherwise it gets quite difficult to test. We can not write a spec that tests which pixels are drawn on the screen after clicking next. That would be … difficult, to put it mildly. We don't have to do it. We need to know that clicking the next button has triggered the proper action, which will then draw the actual Geometry and Colors on the screen.

    How we use Jasmine and Teaspoon to run the JavaScript specs

    Years ago I found a tool called Teaspoon. Looking back, this has been one of the most useful tools in our stack. It allows us to have a Rails project (and we have a lot of those) and to run JavaScript specs in this Rails project. The best thing – it just works (the Rails way).

    # add teaspoon dependency
    $ cd test/dummy/
    $ rails generate teaspoon:install
    # write a few specs
    $ rails s -p 8889
    

    You start a server, visit the url in the browser, and the specs are executed.
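
    The snippet above glosses over adding the gem itself. A minimal sketch of the Gemfile step, assuming the teaspoon-jasmine gem:

    # Gemfile of the Rails project (or the test/dummy app)
    group :development, :test do
      gem "teaspoon-jasmine"
    end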

    Tests are pure JavaScript and we use Jasmine.

    That’s it. Not much to add. It’s simple The specs are a simple JS file located in “spec/javascripts/” folder and here is an example for one of them

    describe("IS.EventDef", function() {
      describe("constructing throws error if", function() {
        it("event name is null", function() {
          expect(() => new IS.EventDef(null, {})).toThrow(new Error("Null argument passed."));
        });
    
        it("event conf is null", function() {
          expect(() => new IS.EventDef("smoe", null)).toThrow(new Error("Null argument passed."));
        });
    
        it("declaring an event, but the class attribute value is null", function() {
          expect(() => {
            const ext = IS.ExtensionDef.CreateByExtensionConf({
              extension: new IS.Extension(),
              events: {
                declaredEvent: {
                  class: null
                }
              }
            });
          }).toThrow("Declared event 'declaredEvent' class attribute is null!");
        });
      });
    });
    

     
  • kmitov 7:51 am on January 19, 2021 Permalink |
    Tags: specs

    Same code – two platforms. With Rails. 

    (Everyday Code – instead of keeping our knowledge in a README.md let's share it with the internet)

    We are running two platforms: FLLCasts and BuildIn3D. Both platforms address entirely different problems for different sets of clients, but with the same code. FLLCasts is about eLearning, learning management and content management, while BuildIn3D is about eCommerce.

    What we are doing is running both platforms with the same code and this article is about how we do it. The main purpose is to give an overview for newcomers to our team, but I hope the community could benefit from it as a whole and I could get feedback and learn what others are doing.

    What do we mean by ‘same code’?

    FLLCasts has a route https://www.fllcasts.com/materials. Results are returned by the MaterialsController.

    BuildIn3D has a route https://platform.buildin3d.com/instructions. Results are returned by the same MaterialsController.

    FLLCasts has things like Organizations, Groups, Courses, Episodes, Tasks which are for managing the eLearning part of the platform.

    BuildIn3D has none of these, but it has WebsiteEmbeds that let eCommerce stores embed 3D building instructions and models on their sites.

    We run the same code with small differences.

    Do we use branches?

    No, we don’t. Branches don’t work for this case. They are hard to maintain. We’ve tried to have an “fc_dev” branch and a “b3_dev” branch for the different platforms, but it gets difficult to maintain. You have to manually merge between the branches. It is true that Git has made merging quite easy, but still it is an “advanced” task and it is getting tedious when you have to do it a few times a day and to resolve conflicts almost every time.

    We use rails engines (gems)

    We are separating the platform into smaller rails engines.
    A common rails engine between FLLCasts and BuildIn3D is called fc-author_materials. It provides the functionality for an author to create a material, both on FLLCasts and on BuildIn3D.

    The engine providing the functionality for Groups, for the eLearning part of FLLCasts, is called fc-groups. This engine is simply not installed on BuildIn3D; we install it only on FLLCasts.

    What does the Gemfile look like?

    Like this:

    install_if -> { !ENV.fetch('CAPABILITIES','').split(",").include?('--no-groups') } do
      gem 'fc-groups_enroll', path: 'gems/fc-groups_enroll'
      gem 'fc-groups', path: 'gems/fc-groups'
    end
    

    We call them "Capabilities". By default each platform is started with the "Capability" of having Groups, but we can disable it and tell the platform to start without Groups. When the platform starts, the Groups are simply not there.

    How about config/routes.rb?

    The fc-groups engine installs its own routes. This means that the main platform config/routes.rb is different from gems/fc-groups/config/routes.rb, and the routes are installed only when the engine is installed.

    Another option is to have an if statement and check for capabilities in the config/routes.rb (sketched below). We still have to decide which is easier to maintain.
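
    Here is a sketch of that second option. The capability check mirrors the CAPABILITIES variable from the Gemfile above; the engine class name is hypothetical:

    # config/routes.rb
    capabilities = ENV.fetch('CAPABILITIES', '').split(",")
    unless capabilities.include?('--no-groups')
      mount FcGroups::Engine => "/groups"
    end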

    Where do we keep the engines? Are they in a separate repo?

    We tried. We had a few of the engines in separate repos. With time we found out it is easier to keep them in the same repo.

    When the engines are in separate repos you have very strict dependencies between them. This proves to be useful, but costs a lot in terms of development and creating a clear API between the engines. It could pay off when we would like to share the engines with the rest of the community, like, for example, Refinery is doing. But we are not there yet. That's why we found out we could spend the time more productively developing features instead of discussing which class goes where.

    With all the rails engines in a single repo we have the mighty Monolith again, and we have to be grown-ups in the team and maintain it, but it is easier than having them in different repos.

    How do we configure the platforms?

    FLLCasts will send you emails from team [at] fllcasts [dot] com

    BuildIn3D will send you emails from team [at] buildin3d [dot] com

    Where is the configuration?

    The configuration is in the config/application.rb. The code looks exactly like this:

    platform = ENV.fetch('FC_PLATFORM', 'fc')
    if platform == 'fc'
      config.platform.sender = "team [at] fllcasts [dot] com"
    elsif platform == 'b3'
      config.platform.sender = "team [at] buildin3d [dot] com"
    end
    

    When we run the platform we set an ENV variable called FC_PLATFORM. If the platform is "fc", this means FLLCasts. If the platform is "b3", this means BuildIn3D.

    In config/environments/production.rb we refer to Rails.application.config.platform.sender. In this way we have one production env for both platforms. We don't have many production envs.

    Why not many production envs?

    We found out that if we have many production envs, we would also need many dev envs and many test envs, and there would be a lot of duplication between them.

    That’s why we are putting the configuration in the application.rb. It’s about the application, not the environment.

    How do we deploy on heroku?

    The first rule is – when you deploy one platform you also deploy the other. We do not allow different versions to be deployed. Both platforms are running the same code, always. Otherwise it gets difficult.

    When we deploy, we do:

    # In the same build we 
    # push the fllcasts app and then the buildin3d app
    git push fllcasts production3:master
    heroku run rake db:migrate --app fllcasts
    
    git push buildin3d production3:master
    heroku run rake db:migrate --app buildin3d

    In this way both platforms always share the same code, except for the short period of a few minutes during deployment.

    How are the views separated?

    The platforms share a controller, but the views are different.

    The controller should return different views for the different platforms. Here is what the controller does:

    def show
      # If the platform is 'b3' return a different set of views
      if Rails.application.config.platform.id == "b3"
        render :template => Rails.application.config.platform.id + "/materials/show"
      else
        render :template => "/materials/show"
      end
    end

    At the same time we have the folders:

    # There are two separate folders for the views. One for B3 and one for FC 
    fc-author_materials/app/views/b3/materials/
    fc-author_materials/app/views/materials/

    How do we test?

    Testing proved to be challenging at first. Most of the time, as the code is the same, the specs should be the same, right?

    Well, no. The code is the same, but the views are different. This means that the system specs are different. We write system and model specs, and we don't write view and controller specs (if you are still writing view and controller specs you should consider stopping; they were deprecated years ago).

    As the views are different the system specs are different.

    We tag the specs that are specifically for the BuildIn3D platform with a tag platform:b3

    
      context "platform is b3", platform: :b3 do
        before :each do
          expect_b3
        end

    When we run the specs, we first run all the specs that are not specifically for b3 with:

    $ rake spec SPEC="$specs_to_build" SPEC_OPTS="--tag ~platform:b3 --order random"

    Then we run a second suite for the tests that are specifically for the BuildIn3D platform:

    # Note that here we set an ENV and we use platform:b3 and not ~platform:b3
    $ FC_PLATFORM="b3" rake spec SPEC="$specs_to_build" SPEC_OPTS="--tag platform:b3 --order random"

    I can see how this will become difficult to maintain if we start a third or a fourth platform, but we will figure that out when we get there. It is not something worth investing any resources into, as we do not plan to start a new platform soon.

    Conclusion

    That’s how we run two platforms with the same code. Is it working? We have about 200 deployments so far in this way. We see that it is working.

    Is it difficult to understand? It is much easier than different branches and it is also much easier than having different repos.

    To summarize – we have a monolith app separated into small engines that are all in the same repo. When running a specific platform we install only the engines that we need. Controllers are the same, views could be different.

    I hope this was helpful and that you can see a way to start a spin-off of your current idea and create a new business with the same code.

    There is a lot to be improved – it would be better to have each and every rails engine as a completely separate project in a different repo that we just include in the platform. But we still don't have the requirement for this, and it would require months of work on a 10-year-old platform like ours. Once we see a clear path for it to pay off, we would probably do it that way.

    For fun

    Thanks for stopping by and reading through this article. Have fun with this 3D model and building instructions.

    GeoSphere3 construction with GeoSmart set

     
  • kmitov 5:25 pm on January 18, 2021 Permalink |
    Tags: specs

    The benefits of running specs against nightly releases of dependencies. 

    Spend some time and resources to set up your Continuous Integration infrastructure to run your spec suites against nightly releases of your dependencies. The benefits are larger than the costs.

    Context

    To further explain the point I will use an example from today.

    We run our specs daily against the latest nightly release of BABYLON.js. On Friday one spec failed. I reported it in the forum (not even a GitHub issue). A few hours later there was a fix and a PR was merged into the main branch of BABYLON.js. We would have the new nightly in a day or two.

    Our specs pass with version 4.2.0 of BABYLON.js, but they fail with BABYLON 5.0.0-alpha.6. A few of the hundreds of extensions running in the Instructions Steps (IS) Framework are using BABYLON.js. The IS framework is powering the 3D building instructions at FLLCasts and BuildIn3D.

    BABYLON.js provides two releases of their library.

    1. Stable – available on https://cdn.babylonjs.com/babylon.js
    2. Preview – available on https://preview.babylonjs.com/babylon.js

    How do we run the specs against the preview (nightly) release of BABYLON.js?

    We’ve configured Jenkins to do two builds. One is against the official release of BABYLON.js that we are using on production. The second run is against the preview release.

    When there is a problem in our code both builds will fail. When there is an issue with the new version of BABYLON.js only the second build fails.

    What is the benefit?

    I think of the benefit as "being in the context". The Babylon team is working on a new feature or changing something. If we find an issue with this change six months later, it would be much more difficult for them to switch context and resolve it. Probably there are already other changes. But when we as developers are in the "context", when we are working on something, have made a change today and there is an issue with this change, it is much easier to see where the problem is. You are in the same context.

    The other large benefit is that when 5.0.0 is released we will know from day one that we support it and we can switch production to the new version. There are exactly 0 days for us to “migrate” to the new version.

    How much does it cost us?

    Basically – zero. The specs run in under 60 seconds and the build is configured with a param.

    What if there are API changes?

    Yes, we can’t just run the same code if there are API changes in BABYLON.js. That’s why we have the branch. If there are API changes we can change our code in the babylon-5.0 branch and keep it up to date with changes in dev, which is most of the time resolved with a simple merge.

    But BABYLON.js is a stable library. There are not many API changes that are happening. At least not in the API that we are using.

    For fun

    As you are here, here is one instruction

    Large Spaceball from Geosmart Spaceball set in 3D
     
  • kmitov 6:18 am on November 30, 2020 Permalink |
    Tags: specs

    Testing an index page in a web application where we depend on the order. 

    [Everyday Code]

    Today the specs for an index page failed and I had to improve them. I decided to share a little trick for when we depend on the order of the objects.

    The spec is for the /subscriptions page where we show an index of all the subscriptions. We order the subscriptions by created_at in DESC order. There are 20 subscriptions per page; after that you must go to the next page. In the spec we open the page and check that there is a link to our subscription.

    visit "/admin/user/subscriptions"
    
    expect(page).to have_link subscription.random_id.to_s, href: "/admin/user/subscriptions/#{subscription.to_param}/edit"

    The problem was that there are other specs which for some reason create subscriptions into the future. This means that at a certain point in time, when more than 20 subscriptions created into the future are in the DB, our spec will fail. It will fail because the newly created subscription is on the second page.

    All the subscriptions on the first page are into the future, as today is 2020-11-30. So our newly created subscription is not among them.

    What are our options?

    Move to the correct page in the spec

    This is an option. It would require a loop in the spec that moves to the next page of the index and searches for the subscription on each page. This is too much logic for a spec.

    Delete all the future subscriptions before starting the spec

    It could be done. But it is more logic in the spec. It would actually need to delete subscriptions, and this is not the job of this spec.

    Create a subscription that is with the most recent created_at

    It is simple.

      # Create the subscription with a created_at later than any other
      # subscription in the DB, so it shows up on the first page.
      let(:subscription) {FactoryBot.create(:subscription, created_at: subscription_created_at_time)}
    
      def subscription_created_at_time
        (Subscription.order(:created_at).last.try(:created_at) || Time.now)+1.second
      end
    
    

    It is the simplest change for this spec. Just create a subscription that is last. In this way we know it will appear on the first page even when other specs have created subscriptions into the future.

     