Tagged: rails

  • kmitov 7:52 am on March 24, 2021
    Tags: rails

    We don’t need ‘therubyracer’ and the libv8 error for compiling native extensions 

    This article is about the error:

    Installing libv8 3.16.14.19 with native extensions
    Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

    and the fact that we don’t need therubyracer – and you probably don’t either.

    There are two solutions for the above error:

    # To install libv8 with a V8 without precompiling V8 
    $ gem install libv8 -v '3.16.14.19' -- --with-system-v8
    
    # Or comment out therubyracer in your Gemfile and stop using it.
    # gem 'therubyracer'

    We went for the latter. Here are more details.

    What are libv8 and therubyracer?

    From libv8

    V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others.

    What therubyracer does is embed the V8 JavaScript interpreter into Ruby.

    In Rails, the execjs gem is used in assets precompilation.

    ExecJS lets you run JavaScript code from Ruby. It automatically picks the best runtime available to evaluate your JavaScript program, then returns the result to you as a Ruby object.

    https://github.com/sstephenson/execjs

    This means that execjs needs a JS runtime, and it can use the one provided by therubyracer, libv8 and V8. But it can use other runtimes as well, like Node.
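
    If you are curious which runtime ExecJS picked on your machine, you can ask it directly. A quick check (assuming the execjs gem is installed):

    require "execjs"

    # Prints the auto-detected runtime, e.g. "Node.js (V8)"
    puts ExecJS.runtime.name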

    What is the libv8 compilation error?

    The libv8 error occurs because the gem can not compile V8 on the machine. V8 is developed in C++. To compile it successfully, the machine must have all the dependencies needed for the compilation. You can try to compile it from scratch, or you can use an already compiled binary. Compiling from scratch takes time, as you have to resolve all the dependencies.

    But we didn’t need therubyracer

    Turns out that on this specific platform we don’t need therubyracer: we already have Node because of webpacker/webpack, we have it on all our machines, and we have it on production on Heroku.

    therubyracer is also discouraged on Heroku –

    If you were previously using therubyracer or therubyracer-heroku, these gems are no longer required and strongly discouraged as these gems use a very large amount of memory.

    A version of Node is installed by the Ruby buildpack that will be used to compile your assets.

    https://devcenter.heroku.com/articles/rails-asset-pipeline#therubyracer

    therubyracer is great for starting quickly, especially when there is no Node. But considering that there was Node on all of our machines, we just don’t need therubyracer.

    As a result the slug size on Heroku was reduced from 210M to 195.9M. Not bad. I guess the memory consumption will also be reduced.

    remote: -----> Compressing...        
    remote:        Done: 210M        
    remote: -----> Launching...        
    remote:        Released v5616     
    
    remote: -----> Compressing...        
    remote:        Done: 195.9M        
    remote: -----> Launching...        
    remote:        Released v5617       

    How did we come across this error?

    Every Wednesday we run an automatic bundle update to see what’s new and update all the gems (of course, only if the tests pass). This Wednesday the build failed because of the libv8 error and native compilation.

    It was just an unused dependency left in the Gemfile.

     
  • kmitov 7:36 pm on February 15, 2021
    Tags: rails

    Usage of ActiveRecord::Relation#missing/associated 

    Today I learned that Rails 6.1 adds a new query method for checking the presence of an association. Good to know. I am sharing it with the team at Axlessoft, and with the community as a whole.

    While reading the article about the methods I thought:

    Are we going to use these methods in our code base?

    I did some greps and it turns out we won’t.

    $ git grep -A 10 joins | grep where.not | grep nil | wc -l
    0
    

    There is not a single place in our code where we call

    .joins(:association).where.not(association: { id: nil })
    
    or
    
    .joins(:association).where(association: { id: nil })
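
    For reference, the new query methods express the same intent directly. A minimal sketch with hypothetical Author and Post models (where.missing shipped in Rails 6.1; where.associated followed in a later Rails release):

    # Authors that have at least one post
    Author.where.associated(:posts)

    # Authors that have no posts at all
    Author.where.missing(:posts)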

    What does this mean?

    Nothing actually. Great methods. I was just curious to see if we will be using them in our code base a lot.

     
  • kmitov 7:31 am on February 3, 2021
    Tags: rails

    Rails after_action order is in reverse 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We’ve used Rails after_action for controllers twice in our platforms and products. It allows you to execute something after a Rails controller action.

    Why? What is the IRL use case?

    Users upload files for a 3D model or building instructions. We want to call an external service that converts the 3D model or buildin3d instructions, so we schedule a job and ping the external service. This means we make a network request in the controller.

    This is never a good idea. Generally you want to return fast from the call to your controller action. On Heroku you have a 15-second time limit to return, and for the user experience it is good to return a result right away, do your work in the background, and report on the progress with some clever JS.

    But for this specific case we have reasons to try to ping the external service. It generally takes about 0.5-1s, it is not a bottleneck for the user experience, and it works well. Except for the cases when it does not work well.

    Today was such a day. Heroku dynos were trying to connect to the service, but it was taking longer than 15 seconds. I don’t know why. So I decided to revise the implementation.

    What is special about after_action order?

    It is the reverse of before_action.

    We want to call the external service in an after_action. In this way it is outside of the logic of the controller method and, if it fails, we handle the exception in the after_action. It is a good separation. The real code is:

    module IsBackend::JenkinsWorkoff
      extend ActiveSupport::Concern

      included do
        after_action do
          # Ping the external service only when there is a rebuild and it is not running
          unless @jenkins_rebuild.nil? || @jenkins_rebuild.running?
            begin
              JenkinsClientFactory.force_jobs_workoff timeout: 10
            rescue => e
              config.logger.info e.message
            end
          end
        end
      end
    end
    

    The issue is with the order of after_action callbacks when there is more than one.

    Generally, before_action callbacks are executed in the order in which they are defined:

      before_action do
        # will be first
      end
    
      before_action except: [:index, :destroy] do
        # will be second
      end

    But after_action callbacks are executed in the reverse order:

      after_action do
        # will be SECOND
      end
    
      after_action except: [:index, :destroy] do
        # will be FIRST
      end
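
    To see both rules in one place, here is a hypothetical controller and the order in which its callbacks fire. The after_action callbacks wrap the action like nested layers – first registered, last executed:

      class ExampleController < ApplicationController
        before_action { Rails.logger.info "before 1" } # runs FIRST
        before_action { Rails.logger.info "before 2" } # runs SECOND

        after_action { Rails.logger.info "after 1" }   # runs SECOND
        after_action { Rails.logger.info "after 2" }   # runs FIRST
      end

      # A request logs: before 1, before 2, <the action>, after 2, after 1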

    Why? I will leave the discussion in the Rails repository to explain it best – https://github.com/rails/rails/issues/5464

    How do we use it?

    When building the buildin3d instruction below, we pinged the external service during the build process. Because of the after_action, we can enjoy this FabBrix dog.

    FabBRIX Pets, Dog in 3D building instructions
     
  • kmitov 5:30 pm on January 27, 2021
    Tags: rails

    What is the most complex query that you are comfortable to live with before refactoring? 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    Today I did some refactoring and I again worked with a query that I somehow never got around to refactoring. It is the most “complex” query that we seem to be comfortable with in our platforms – other queries get improved, but this one stays the same. This does not mean it is not understandable or not maintainable. It is just the most “complex” query we are comfortable with: anything more complex than this and we naturally think about refactoring it.

    This got me thinking.

    What are real-life examples of the most complex relational database queries others are comfortable enough to live with before thinking about changing the model or the schema?

    Is my threshold too low or too high?

    I would be happy to learn more from you.

    Here is the query (Active Record) in question:


    # Get all the references for 'Tasks' that are in CourseSections that are in Courses 
    # and filter all of them through the translations they have
    # 
    # These are 8 tables (ContentRef, Task, CourseSection, Course, and 4 translation tables) 
    ContentRef.
      joins(:course_section).
      includes(:translations).
      includes(:task).
      includes(task: [:translations]).
      includes(course_section: [course: [:translations]]).
      includes(course_section: [:translations]).
      where(courses: { id: course_ids }, content_type: "Task").
      select("content_refs.*, course_translations.*, course_section_translations.*, task_translations.*")
    
    # The models are:
    class ContentRef < ApplicationRecord
      belongs_to :course_section
      belongs_to :task
      has_many :translations
    end
    
    class CourseSection < ApplicationRecord
      belongs_to :course
      has_many :translations
    end
    
    class Course < ApplicationRecord
      has_many :translations
    end
    
    class Task < ApplicationRecord
      has_many :translations
    end
    
     
  • kmitov 4:58 pm on January 25, 2021
    Tags: liquid, rails

    How we use Liquid template language? (part 1) 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    In the FLLCasts and BuildIn3D platforms we use three template languages: ERB (as it is all Rails), Liquid and Mustache. This article gives a brief overview of how we are using and extending Liquid in our platforms. The main purpose is to give an overview for newcomers to our team, but I hope the developer community could benefit from it as a whole.

    What is Liquid?

    Quoting directly from the site

    Liquid is an open-source template language created by Shopify and written in Ruby. It is the backbone of Shopify themes and is used to load dynamic content on storefronts.

    https://shopify.github.io/liquid/

    You create a template, you feed it values, it produces a result:

    @template = Liquid::Template.parse("hi {{name}}") # Parses and compiles the template
    @template.render('name' => 'tobi')                # => "hi tobi"

    Why Liquid?

    Short answer – I thought: “It is good enough for Shopify, so it should be useful for us.”

    With so many template engines it really is an interesting question why would we use Liquid.

    When I made the decision years ago I was facing the following problem.

    How do we give admins the ability to create email templates in which they write the id of an Episode? Before the email is sent, the template should evaluate what the title of this Episode is and fill it in the HTML.

    The problem with Episode titles was the following: an author creates an Episode and sets the title to “Line following with LEGO Mindstorms”. An admin prepares a message for all users that the new “Line following with LEGO Mindstorms” tutorial is released. While the admin prepares the email and includes the title of the tutorial in it, the author changes the title of the tutorial to “Line following with LEGO Mindstorms EV3”. So we end up with two titles for the tutorial: one in the email and one on the site. The same happens with Pictures.

    We send regular emails to subscribed users, and I wanted to give admins the opportunity to create a template of the email that is evaluated when the email is sent. The solution was to give admins the ability to create a “digest_message” in the platform and to fill this digest message with template values. For example:

    Hello, {{ user.name }},
    This is what we recommend you start with:
    {% episode = Episode.find(1) %}
    {{ episode.title }}
    {{ episode.destroy SHOULD NOT BE ALLOWED}}

    This was my goal: to get an instance of Episode, but without all the dangerous methods like “.destroy”. As an admin I should still be able to call “episode.title”, but I should not be able to call “episode.destroy”.

    This was how we started with Liquid. This was the first time we needed it.

    How do we use Liquid?

    The code above is close to Liquid, but it is not valid Liquid. You can not use Episode.find(1) in Liquid, and this is a good thing. I don’t want this method to be available for admins to write in emails, because they could also write other, more dangerous methods.

    Liquid gave us the solution – Liquid Drops

    Liquid::Drop

    The idea of Liquid::Drop is that you get a real object and Liquid can call real methods on this object while the template is evaluated. Here is our Episode drop:

    module LiquidDrops
      class EpisodeDrop < Liquid::Drop
        
        def initialize episode
          @episode = episode
        end
    
        def title
          @episode.title.try(:html_safe)
        end
    
        def description
          @episode.description.try(:html_safe)
        end
    
        def published_at
          @episode.published_at.strftime('%d %B %Y')
        end
    
      end
    end
    

    Here is how we use this drop in a Liquid Template.

    {% assign episode = episodes.1189 %}
    {% assign title = episode.title %} 
    {% assign description = episode.description %}
    <article>
      <h3>
        {{ title }}
      </h3>
      <p> {{ description }}</p>
    </article>

    As you can see, we get an instance of episode 1189 and we can then call specific methods on this instance, like episode.title or episode.description. This is not the real ActiveRecord object for Episode. It is a new instance that wraps the ActiveRecord object and delegates specific methods to it. Liquid will only call the methods that are available on the Drop.

    The tricky part is to tell Liquid that it should connect ‘episodes.1189’ with the specific Drop class. We pass the drops as a context to template.render:

    @drops['episodes'] = LiquidDrops::EpisodesDrop.new
    template = Liquid::Template.parse(template_content)
    result = template.render(@drops)
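
    For ‘episodes.1189’ to resolve, the EpisodesDrop has to answer lookups for keys it does not know in advance. Our actual class is not shown here, but a minimal sketch could use liquid_method_missing – the Liquid::Drop hook for unknown keys:

    module LiquidDrops
      class EpisodesDrop < Liquid::Drop

        # Called by Liquid for any unknown key, e.g. {{ episodes.1189 }}
        def liquid_method_missing(id)
          episode = Episode.find_by(id: id)
          EpisodeDrop.new(episode) if episode
        end

      end
    end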

    In this way I managed to solve our first challenge: allow admins to write emails that, when rendered, dynamically get the current title of the Episode, without giving them access to more dangerous methods like Episode#destroy. I could have easily used ERB, but Liquid was such a good fit. It was designed exactly for this, and Shopify was using it for their themes.

    Screenshot and live example

    If you register to FLLCasts (https://www.fllcasts.com) you will receive a couple of emails and all of them use Liquid.

    If you don’t want to subscribe, here is a screenshot from a recently sent email for a new Course.

    Example from an email that was rendered with Liquid

    You see the picture, title and description. These are all evaluated and added to the HTML with the use of Liquid.

    Conclusion

    This concludes the first part of this series. In the next parts I will take a look at how we build Filters and Tags, and how and why we use them.

     
  • kmitov 7:10 am on January 20, 2021
    Tags: rails

    ‘bundle update’ – why we run it regularly and automatically to update our dependencies? 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    Today is Wednesday, and on Wednesdays we have an automatic Jenkins job that runs ‘bundle update’. This will be the 567th time this build is run. It has saved us a lot of work, and in this article I would like to share why and how we automatically run ‘bundle update’.

    What is ‘bundle update’?

    In the Ruby and Ruby on Rails world the dependencies are called gems. Gems can be installed with the ‘gem’ command. ‘bundle’ is a command provided by the Bundler tool, which automates this process. To quote from the Bundler web site:

    Bundler provides a consistent environment for Ruby projects by tracking and installing the exact gems and versions that are needed.

    Bundler is an exit from dependency hell, and ensures that the gems you need are present in development, staging, and production. Starting work on a project is as simple as bundle install.

    https://bundler.io/

    With Bundler you can have two different projects working with two different sets of dependencies and Bundler will make sure the right dependencies are available for each project.

    What is ‘$ bundle install’?

    Both FLLCasts and BuildIn3D are Rails platforms. Before starting the platforms we should install all the dependencies. We use Bundler and the command is

    $ cd platform/
    $ bundle install

    ‘$ bundle install’ will make sure that all the gems that we are dependent on are available on our machine.

    FLLCasts and BuildIn3D for example are dependent on:

    # FLLCasts Gemfile contains
    gem "rails", '~> 6.0.3'
    gem "webpacker", "~> 5.0"
    gem 'turbolinks', '~> 5.2.0'

    This means that we are happy to work with any version of turbolinks that is 5.2.X. It could be 5.2.0, 5.2.1, 5.2.2 and so on. The last digit could change.

    ‘bundle install’ makes sure that you have a compatible version. But what if there is a new version of turbolinks – 5.2.5?

    What is ‘$ bundle update’?

    ‘$ bundle update’ for me is the real deal. It will check what new compatible versions have been released for the gems we depend on. If we are currently using turbolinks 5.2.0 and there is a 5.2.1, ‘bundle update’ will detect this and fetch the new version of turbolinks for us to use. It will write the specific versions that are used into the Gemfile.lock.

    For the turbolinks example the Gemfile.lock contains:

    # Gemfile.lock for FLLCasts and BuildIn3D
        turbolinks (5.2.1)
          turbolinks-source (~> 5.2)

    When should one run ‘$ bundle update’?

    That’s a tough question and a hard sell. Every member of our team has at a certain point asked this question, made a statement like “But this is not automatic. I want to update dependencies when I want to update dependencies!”, or has in some other way struggled with ‘$ bundle update’. Myself included.

    The answer is:

    As the Rails and Ruby communities are live and vibrant, things are improved quite often. One should run ‘$ bundle update’ regularly, and one should do it automatically, to make sure the dependencies are up to date.

    But updating the dependencies could break the platform!

    Could an automatic ‘bundle update’ break the platform?

    Yes. It could. That’s why we are doing:

    # Run bundle update
    $ bundle update
    
    # Commit the new result to a new branch
    $ git push elvis pr_fllcasts_update --force
    
    # Try to merge the new branch like any other branch. Run merge and then the specs
    $ git merge ...
    $ rake spec (on the merge)
    # If rake is successful continue with the merge. If it is not, then abort the merge. 

    We update and commit the changes to a new Branch (or create a Pull Request, depending on the project), and we then run the normal build.

    It will pull from this Branch (or PR) and will run the specs. If the specs pass successfully we continue with the merge.

    In this way we are constantly up to date.

    What is the benefit of automatic ‘$ bundle update’?

    As I’ve written before, the main benefit is the allocation of resources and keeping the developers’ context – https://kmitov.com/posts/the-benefits-of-running-specs-against-nightly-releases-of-dependencies/

    If a change in a gem breaks something that we are dependent on, we can identify this within a week. We are not waiting for months and we can report it now. The gem owners are probably still in the context of the change they’ve introduced and could understand and resolve the issue faster.

    Additionally, we no longer have ‘migration’ tasks on our team. We are not dedicating any resources to ‘migrating to new dependencies’, as it happens automatically.

    # Here are just the gems from Jan 13, 2021 7:00:00 AM
    Installing aws-partitions 1.416.0 (was 1.413.0)
    Installing chef-utils 16.9.20 (was 16.8.14)
    Installing autoprefixer-rails 10.2.0.0 (was 10.1.0.0)
    Installing omniauth 2.0.0 (was 1.9.1)
    Installing http-parser 1.2.3 (was 1.2.2) with native extensions
    Installing dragonfly 1.3.0 (was 1.2.1)
    Installing aws-sdk-core 3.111.0 (was 3.110.0)
    Installing omniauth-oauth2 1.7.1 (was 1.7.0)
    Installing aws-sdk-kms 1.41.0 (was 1.40.0)
    Installing globalize 5.3.1 (was 5.3.0)
    Installing money-rails 1.13.4 (was 1.13.3)
    

    Every Wednesday we update the gems, and if there are problems we know by the middle of the week, so we can address them right away or schedule them appropriately for the next weeks.

     
  • kmitov 7:51 am on January 19, 2021
    Tags: rails

    Same code – two platforms. With Rails. 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We are running two platforms: FLLCasts and BuildIn3D. Both platforms address entirely different problems for different sets of clients, but with the same code. FLLCasts is about eLearning, learning management and content management, while BuildIn3D is about eCommerce.

    What we are doing is running both platforms with the same code and this article is about how we do it. The main purpose is to give an overview for newcomers to our team, but I hope the community could benefit from it as a whole and I could get feedback and learn what others are doing.

    What do we mean by ‘same code’?

    FLLCasts has a route https://www.fllcasts.com/materials. Results are returned by the MaterialsController.

    BuildIn3D has a route https://platform.buildin3d.com/instructions. Results are returned by the same MaterialsController.

    FLLCasts has things like Organizations, Groups, Courses, Episodes, Tasks which are for managing the eLearning part of the platform.

    BuildIn3D has none of these, but it has WebsiteEmbeds that allow eCommerce stores to embed 3D building instructions and models.

    We run the same code with small differences.

    Do we use branches?

    No, we don’t. Branches don’t work for this case; they are hard to maintain. We tried having an “fc_dev” branch and a “b3_dev” branch for the different platforms, but it got difficult to maintain. You have to manually merge between the branches. It is true that Git has made merging quite easy, but it is still an “advanced” task, and it gets tedious when you have to do it a few times a day and resolve conflicts almost every time.

    We use rails engines (gems)

    We are separating the platform into smaller Rails engines.
    A common Rails engine between FLLCasts and BuildIn3D is called fc-author_materials. It provides the functionality for an author to create a material, both on FLLCasts and on BuildIn3D.

    The engine providing the functionality for Groups, the eLearning part of FLLCasts, is called fc-groups. This engine is simply not installed on BuildIn3D; we install it only on FLLCasts.

    What does the Gemfile look like?

    Like this:

    install_if -> { !ENV.fetch('CAPABILITIES','').split(",").include?('--no-groups') } do
      gem 'fc-groups_enroll', path: 'gems/fc-groups_enroll'
      gem 'fc-groups', path: 'gems/fc-groups'
    end
    

    We call them “Capabilities”. By default each platform is started with the “Capability” of having Groups, but we can disable it and tell the platform to start without Groups. When the platform starts, the Groups are simply not there.
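
    A hypothetical invocation that starts a platform without the Groups capability would then look like this – the CAPABILITIES value must contain ‘--no-groups’ so that install_if skips the gems:

    # Install the gems and start the platform without Groups
    $ CAPABILITIES="--no-groups" bundle install
    $ CAPABILITIES="--no-groups" rails server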

    How about config/routes.rb?

    The fc-groups engine installs its own routes. This means that the main platform config/routes.rb is different from gems/fc-groups/config/routes.rb, and the routes are installed only when the engine is installed.

    Another option is to have an if statement and to check for capabilities in the config/routes.rb. We still have to decide which is easier to maintain.
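
    As an illustration only (our real files may differ), an isolated engine typically draws its routes in its own config/routes.rb and the main app mounts it when the engine is present:

    # gems/fc-groups/config/routes.rb – hypothetical contents
    FcGroups::Engine.routes.draw do
      resources :groups
    end

    # main platform config/routes.rb – mounted only when the engine is installed
    Rails.application.routes.draw do
      mount FcGroups::Engine => "/groups" if defined?(FcGroups::Engine)
    end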

    Where do we keep the engines? Are they in a separate repo?

    We tried. We had a few of the engines in separate repos. With time we found out it is easier to keep them in the same repo.

    When the engines are in separate repos you have very strict dependencies between them. This proves to be useful, but it costs a lot in terms of development and of creating a clear API between the engines. It could pay off if we would like to share the engines with the rest of the community, as Refinery is doing for example. But we are not there yet. That’s why we found we could spend the time more productively developing features instead of discussing which class goes where.

    With all the Rails engines in a single repo we have the mighty monolith again. We have to be grown-ups in the team and maintain it, but it is easier than having the engines in different repos.

    How do we configure the platforms?

    FLLCasts will send you emails from team [at] fllcasts [dot] com

    BuildIn3D will send you emails from team [at] buildin3d [dot] com

    Where is the configuration?

    The configuration is in the config/application.rb. The code looks exactly like this:

    platform = ENV.fetch('FC_PLATFORM', 'fc')
    if platform == 'fc'
      config.platform.sender = "team [at] fllcasts [dot] com"
    elsif platform == 'b3'
      config.platform.sender = "team [at] buildin3d [dot] com"
    end
    

    When we run the platform we set an ENV variable called FC_PLATFORM. If the platform is “fc”, this means FLLCasts. If the platform is “b3”, this means BuildIn3D.

    In config/environments/production.rb we refer to Rails.application.config.platform.sender. In this way we have one production env for both platforms. We don’t have many production envs.
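
    For example, the mailer configuration in production could use it like this (a sketch – our actual mailer setup is not shown in this article):

    # config/environments/production.rb
    config.action_mailer.default_options = {
      from: Rails.application.config.platform.sender
    }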

    Why not many production envs?

    We found out that if we have many production envs, we would also need many dev envs and many test envs, and there would be a lot of duplication between them.

    That’s why we are putting the configuration in the application.rb. It’s about the application, not the environment.

    How do we deploy on heroku?

    The first rule is: when you deploy one platform, you also deploy the other platform. We do not allow different versions to be deployed. Both platforms run the same code, always. Otherwise it gets difficult.

    When we deploy we do

    # In the same build we 
    # push the fllcasts app and then the buildin3d app
    git push fllcasts production3:master
    heroku run rake db:migrate --app fllcasts
    
    git push buildin3d production3:master
    heroku run rake db:migrate --app buildin3d

    In this way both platforms always share the same code, except for a short period of a few minutes during deployment.

    How are the views separated?

    The platforms share a controller, but the views are different.

    The controller should return different views for the different platforms. Here is what the controller does:

    def show
      # If the platform is 'b3' return a different set of views
      if Rails.application.config.platform.id == "b3"
        render :template => Rails.application.config.platform.id + "/materials/show"
      else
        render :template => "/materials/show"
      end
    end

    At the same time we have the folders:

    # There are two separate folders for the views. One for B3 and one for FC 
    fc-author_materials/app/views/b3/materials/
    fc-author_materials/app/views/materials/

    How do we test?

    Testing proved to be challenging at first. Most of the time, as the code is the same, the specs should be the same, right?

    Well, no. The code is the same, but the views are different. This means that the system specs are different. We write system and model specs, and we don’t write view and controller specs (if you are still writing view and controller specs you should consider stopping; they were deprecated years ago).

    As the views are different the system specs are different.

    We tag the specs that are specifically for the BuildIn3D platform with a tag platform:b3

    
      context "platform is b3", platform: :b3 do
        before :each do
          expect_b3
        end

    When we run the specs, we first run all the specs that are not specifically for b3 with:

    $ rake spec SPEC="$specs_to_build" SPEC_OPTS="--tag ~platform:b3 --order random"

    Then we run a second suite for the tests that are specifically for the BuildIn3D platform.

    # Note that here we set an ENV and we use platform:b3 and not ~platform:b3
    $ FC_PLATFORM="b3" rake spec SPEC="$specs_to_build" SPEC_OPTS="--tag platform:b3 --order random"

    I can see how this will become difficult to maintain if we start a third or a fourth platform, but we will figure that out when we get there. It is not something worth investing any resources into now, as we do not plan to start a new platform soon.

    Conclusion

    That’s how we run two platforms with the same code. Is it working? We have about 200 deployments so far in this way. We see that it is working.

    Is it difficult to understand? It is much easier than different branches and it is also much easier than having different repos.

    To summarize – we have a monolith app separated into small engines that are all in the same repo. When running a specific platform we install only the engines that we need. Controllers are the same; views could be different.

    I hope this was helpful and that you can see a way to start a spin-off of your current idea and create a new business with the same code.

    There is a lot to be improved – it would be better to have each and every Rails engine as a completely separate project in a different repo that we just include in the platform. But we still don’t have the requirement for this, and it would require months of work on a 10-year-old platform like ours. Once we see a clear path for it to pay off, we will probably do it that way.

    For fun

    Thanks for stopping by and reading through this article. Have fun with this 3D model and building instructions.

    GeoShpere3 construction with GeoSmart set

     
  • kmitov 5:25 pm on January 18, 2021
    Tags: rails

    The benefits of running specs against nightly releases of dependencies. 

    Spend some time and resources to set up your Continuous Integration infrastructure to run your spec suites against nightly releases of your dependencies. The benefits are larger than the costs.

    Context

    To further explain the point I will use an example from today.

    We run our specs daily against the latest nightly release of BABYLON.js. On Friday one spec failed. I reported it in the forum (not even a GitHub issue). A few hours later there was a fix, and a PR was merged into the main branch of BABYLON.js. We should have the new nightly in a day or two.

    Our specs pass with version 4.2.0 of BABYLON.js, but they fail with BABYLON 5.0.0-alpha.6. A few of the hundreds of extensions running in the Instructions Steps (IS) Framework use BABYLON.js. The IS Framework powers the 3D building instructions at FLLCasts and BuildIn3D.

    BABYLON.js provides two releases of their library.

    1. Stable – available on https://cdn.babylonjs.com/babylon.js
    2. Preview – available on https://preview.babylonjs.com/babylon.js

    How do we run the specs against the preview (nightly) release of BABYLON.js?

    We’ve configured Jenkins to do two builds. One is against the official release of BABYLON.js that we are using on production. The second run is against the preview release.

    When there is a problem in our code both builds will fail. When there is an issue with the new version of BABYLON.js only the second build fails.
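
    The build parameter simply selects which release URL the specs load. A minimal sketch of the idea (the variable name and helper are assumptions for illustration, not our actual Jenkins configuration):

    # Hypothetical spec helper: choose the BABYLON.js build based on a CI param
    BABYLON_URL =
      if ENV["BABYLON_RELEASE"] == "preview"
        "https://preview.babylonjs.com/babylon.js"
      else
        "https://cdn.babylonjs.com/babylon.js"
      end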

    What is the benefit?

    I think of the benefit as “being in the context”. The Babylon team is working on a new feature, or they are changing something. If we find an issue with this change six months later, it would be much more difficult for them to switch context and resolve it – probably there are already other changes. But when we as developers are in the “context”, when we are working on something and have made a change today and there is an issue with this change, it is much easier to see where the problem is. You are in the same context.

    The other large benefit is that when 5.0.0 is released we will know from day one that we support it and we can switch production to the new version. There are exactly 0 days for us to “migrate” to the new version.

    How much does it cost us?

    Basically – zero. The specs run in under 60 seconds and the build is configured with a param.

    What if there are API changes?

    Yes, we can’t just run the same code if there are API changes in BABYLON.js. That’s why we have the babylon-5.0 branch. If there are API changes, we change our code in the babylon-5.0 branch and keep it up to date with the changes in dev, which most of the time is resolved with a simple merge.

    But BABYLON.js is a stable library. There are not many API changes happening – at least not in the API that we are using.

    For fun

    As you are here, here is one instruction

    Large Spaceball from Geosmart Spaceball set in 3D
     
  • kmitov 9:32 pm on January 11, 2021
    Tags: acts_as_paranoid, globalize-rails, rails

    The problems with acts_as_paranoid and generally with Soft Delete of records in Rails 

    I am about to refactor and completely remove acts_as_paranoid from one of the models in our platform. Certain issues have been piling up over the last few years, and it is now already difficult to support. As I am about to tell this to my colleagues tomorrow, I thought I would first structure the summary in an article and send the article directly to the whole team.

    If you are thinking about using acts_as_paranoid, or generally about soft deleting your records, this article could give you a good understanding of what to expect.

    Disclaimer: I am not trying to roast acts_as_paranoid here. I think it is a great gem that I’ve used successfully for years, and it has helped me save people’s work when they accidentally delete something. It could be exactly what you need. For us it got too entangled and dependent on workarounds with other gems, and I am planning to remove it.

    acts_as_paranoid VS Globalize – problem 1

    We have a model called Material. Material has a title. The title is managed by globalize as we are delivering the content in a number of languages.

    class Material < ApplicationRecord
      acts_as_paranoid
      translates :title
    end

    The problem is that Globalize knows nothing about acts_as_paranoid. You can delete a Material and it will delete the translations, but when you try to recover the Material there is an error, because of how the translations are implemented and the order in which the translations and the Material are recovered. Which record should be recovered first? Details are at https://github.com/ActsAsParanoid/acts_as_paranoid/issues/166#issuecomment-650974373, but to quote here:

    Ok, then I think I see what happens: ActsAsParanoid tries to recover your translations but their validation fails because they are recovered before the main object. When you just call recover this means the records are just not saved and the code proceeds to save the main record. However, when you call recover!, the validation failure leads to the exception you see.

    In this case, it seems, the main record needs to be recovered first. I wonder if that should be made the default.

    I worked around this, and I have used the workaround like this for a year:

    module TranslatableSoftdeletable
      extend ActiveSupport::Concern
    
      included do
    
        translation_class.class_eval do
          acts_as_paranoid
        end
    
        before_recover :restore_translations
      
      end
    
      def restore_translations
        translations.with_deleted.each do |translation|
          translation.deleted_at = nil
          translation.save(validate: false)
        end
        self.translations.reload
      end
    
    end

    acts_as_paranoid VS Globalize – problem 2

    Let’s say that you delete a Material. What is the title of this deleted material?

    material = Material.find(1)
    material.destroy
    material.title == ?

    Remember that the Material has all of its translations for the title in a table that just got soft deleted. So the correct answer is “nil”. The title of the deleted material is nil.

    material = Material.find(1)
    material.destroy
    material.title == nil # is true

    You can work around this with:

    material.translations.with_deleted.where(locale: I18n.locale).first.title

    But this is quite ugly.

    acts_as_paranoid VS cancancan

    The material could have authors. An author is the connection between the User and the Material. We are using cancancan for authorization in the controllers.

    In the controller, for a specific author, we show only the models that are for this author. We have the following rule:

    can [:access, :read, :update, :destroy], clazz, authors: { user_id: user.id }
    

    Here the problem is that the rule can only return non-deleted records. If you would like to implement a Trash/Recycle Bin functionality for the Materials, you can not reuse the rule. The reason is that cancancan can not evaluate can? and cannot? for rules defined with an SQL statement, and “authors: { user_id: user.id }” is an SQL statement.

    What we could do is use scopes:

    # in the ability
    can [:access, :read, :update, :destroy], Material, Material.with_authors(user.id)
    
    # In the Material
    scope :with_authors, -> (user_ids) {
        joins(:authors).where(authors: {user_id: user_ids})
    }
    

    We move the logic for authorization from the ability to the model. I can live with that; it is not that outrageous. But it does not work, because the authors association will return empty authors for the Material, as the materials are also soft deleted.

    acts_as_paranoid vs cancancan – case 2

    When declaring load_and_authorize_resource in the controller, it will not fetch records that are already deleted. So you can not use cancancan to load a deleted record in the controller; you must load it yourself.

    class MaterialsTrashController < ApplicationController
    
      before_action only: [:show, :edit, :update, :destroy] do
        @content_picture = Material.only_deleted.find(params[:id])
      end
    
    
      load_and_authorize_resource :content_picture, class: "Material", parent: false
    
    end

    This was ugly, but as we had one trash controller it was kind of acceptable. With a second one it got more difficult and deviated from the logic of the other controllers.

    acts_as_paranoid vs ContentPictures

    Every Material on our platform has many ContentPictures; there is a ContentPictureRef model. One of the pictures could be the thumbnail. Once you delete a material, what is the thumbnail that we can show for it?

    As the content_picture_refs relation is also soft deleted, we had to change the logic for returning the thumbnail. We changed it from:

    def thumbnail
        thumbnail_ref = self.content_picture_refs.where(content_picture_type:  :thumbnail).first
    end

    to

    def thumbnail
        thumbnail_ref = self.content_picture_refs.with_deleted.where(content_picture_type:  :thumbnail).first
    end

    I can live with this. Every relation that we have on the deleted record must be called with “with_deleted”. ContentPictures, Authors and other relations all have to be changed for us to be able to see the deleted materials in a table that represents the Trash.

    acts_as_paranoid vs Active Record Callbacks

    There is another issue right at https://github.com/ActsAsParanoid/acts_as_paranoid/issues/168

    It took me hours to debug a few months back. The order of before_destroy and acts_as_paranoid should not really matter, but it does:

    class Episode < ApplicationRecord
      acts_as_paranoid
      before_destroy do
        puts self.authors.count
      end
      has_many :authors, dependent: :destroy
    end
    
    # The same model, with acts_as_paranoid declared after the callback,
    # behaves differently on destroy:
    class Episode < ApplicationRecord
      before_destroy do
        puts self.authors.count
      end
      acts_as_paranoid
      has_many :authors, dependent: :destroy
    end

    acts_as_paranoid vs belongs_to

    By using acts_as_paranoid, belongs_to should be:

    class Author < ApplicationRecord
       belongs_to :material, with_deleted: true
    end

    I could also live with this. At a certain stage it was easier to just add with_deleted: true to the belongs_to relations.

    acts_as_paranoid vs Shrine

    So the Material has a Shrine attachment stored on Amazon S3. What should happen when you delete the Material? Should you delete the file from S3? Of course not. If you delete the file, you will not be able to recover it.

    The solution was to modify the Shrine Uploader

    class Attacher < Shrine::Attacher
    
      def activerecord_after_destroy
        # Just don't call super and keep the file.
        # Since objects at BuildIn3D & FLLCasts are soft deleted we want to
        # handle the deletion logic in another way.
        true
      end
    
    end
    

    I have lived with this for months and did not have many issues.

    What Pain and Problem are we addressing?

    The current problem that I can not find a workaround for is the MaterialsTrashController that is using CanCanCan. All the solutions would require this controller to be different from the rest of the controllers, and my biggest concern is that this will later result in issues. I would like to have a single place where we check whether a User has access to the Material and whether they could read, update or recover it. If we split the logic of the MaterialsController and the MaterialsTrashController, we would end up with a hidden and difficult to maintain duplication.

    But what is the real problem that we want to solve?

    On our platform we have authors for the instructions and we work closely with them. Imagine one particular author whom I will call TM (Taekwondo Master). TM uploads a material, and from time to time he could accidentally delete one. That’s it. When he deletes a Material it should not be deleted, but rather put in a trash. Then, when he deletes it from the trash, he must confirm with a blood sample. That’s it. I just want to stop TM from losing any Material by accident.

    The solution is pretty simple.

    In the MaterialsController, just show all the materials that do not have the :deleted_at column set.
    
    In the MaterialsTrashController, just show only the Materials with :deleted_at set.

    I can solve the whole problem with one simple filter that would take me about a minute to implement. We don’t need any of the problems above; they simply will not exist.
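
    A minimal sketch of that plain-column approach, without acts_as_paranoid (the scope names are my own, not from our code base):

    class Material < ApplicationRecord
      # What MaterialsController shows
      scope :kept,    -> { where(deleted_at: nil) }
      # What MaterialsTrashController shows
      scope :trashed, -> { where.not(deleted_at: nil) }
    end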

    That’s it. I will start with the Material and move through the other models as we implement the additional Trash controllers for them.

     
  • kmitov 10:10 am on December 10, 2020
    Tags: rails

    Send array params to a rails server from a JavaScript code – URL and URLSearchParams 

    The goal was to filter building instructions on buildin3d.com by brand. We had to parse and create the URL on the client, so I played around with URL and URLSearchParams (again). I will try to summarize the implementation here in the hope that it could help you understand how URL and URLSearchParams work and how to use them with a Ruby on Rails app.

    Rails Server side request with array param

    The server accepts a request in the form:

    https://platform.buildin3d.com/instructions?in_categories[]=1&in_categories[]=2&in_categories[]=13

    This means we could pass an array with the ids of the categories. The result will contain only the 3D assembly instructions that are for Brands in these categories.
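
    On the Rails side, reading such an array param could look like the sketch below (the model and column names are hypothetical – our actual filtering logic is not shown in this article):

    class MaterialsController < ApplicationController
      def index
        # in_categories[]=1&in_categories[]=2 arrives as an array of strings
        category_ids = Array(params[:in_categories])
        @materials = Material.all
        @materials = @materials.where(category_id: category_ids) if category_ids.any?
      end
    end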

    How to send the request on the client side

    On the client side we have an <ul> element with some <li> elements representing the categories.

    The brands filter is on the left.

    When we click on the brand we would like to add the brand to the URL and redirect the user to the new URL.

    So if the current url is

    https://platform.buildin3d.com/instructions?in_categories[]=1&in_categories[]=2

    and we select a new brand I would like to send the user to

    https://platform.buildin3d.com/instructions?in_categories[]=1&in_categories[]=2&in_categories[]=13

    How to add array params to the URL with JavaScript

    Here is the whole code of the Stimulus JS controller

    import { Controller } from "stimulus";
    
    /**
     * This is a controller used for filtering by brands [categories] on the materials index page
     *
     * @author Kiril Mitov
     */
    export default class extends Controller {
      static targets = ["tree"];
    
      connect() {
        console.log("connect");
        const scope = this;
        this.setFromUrl();
        this.treeTarget.addEventListener("click", e => {
          e.preventDefault();
          const li = e.target.closest("li");
          const input = li.querySelector("input");
          input.checked = !input.checked;
          scope.goToNewLocation();
        });
      }
    
      setFromUrl() {
        const url = new URL(window.location);
        const categoryIds = new URL(window.location).searchParams.getAll("in_categories[]")
    
        Array.from(this.treeTarget.querySelectorAll("li[data-category-id]"))
          .forEach(li => {
            const input = li.querySelector("input");
            const selected = categoryIds.indexOf(li.dataset["categoryId"]) != -1
            input.checked = selected;
          });
      }
    
      goToNewLocation() {
        const url = new URL(window.location)
        const searchParams = url.searchParams;
        searchParams.delete("in_categories[]");
        Array.from(this.treeTarget.querySelectorAll("li[data-category-id]"))
          .filter(li => li.querySelector("input").checked)
          .forEach(li => searchParams.append("in_categories[]",li.dataset["categoryId"]))
        searchParams.sort();
        window.location = url.toString();
      }
    }
    

    There are a few important things in the code

    Opening the page on a new location

    window.location = url.toString()

    This will open the new page for the user.

    Adding the selected brands to the url search query

    We listen for a click event from the user and get a list of all the checked brands.

    goToNewLocation() {
        const url = new URL(window.location)
        const searchParams = url.searchParams;
        searchParams.delete("in_categories[]");
        Array.from(this.treeTarget.querySelectorAll("li[data-category-id]"))
          .filter(li => li.querySelector("input").checked)
          .forEach(li => searchParams.append("in_categories[]",li.dataset["categoryId"]))
        searchParams.sort();
        window.location = url.toString();
      }

    First we delete the param “in_categories[]” with URLSearchParams.delete. This removes the param from the searchParams, and if we then call .toString() we receive the new search query without this param.

    Then we call this.treeTarget.querySelectorAll to get all the checkboxes, filter only the checked ones, and append the “in_categories[]” param for every selected checkbox. This is what the server requires.

    searchParams.append("in_categories[]",li.dataset["categoryId"])
    

    As a result we have the query

    https://platform.buildin3d.com/instructions?in_categories[]=1&in_categories[]=2

    Because we use only URLSearchParams.append and URLSearchParams.delete, all the other params are still in the search query.

    As a summary:

    We have a Ruby on Rails server that accepts an array param, and we form this array param in a Stimulus JS controller. We use the URLSearchParams methods append and delete, as they preserve the other params that are already in the URL.

     