Tagged: rails 6

  • kmitov 5:48 am on September 6, 2021 Permalink |
    Tags: rails 6

    Refresh while waiting with RSpec+Capybara in a Rails project 

    This is some serious advanced stuff here. You should share it.

    A colleague, looking at the git logs

    I recently had to create a spec with Capybara+RSpec where I refresh the page and wait for a value to appear on it. In this particular scenario there is no need for WebSockets or any JS. We just need to refresh the page.

    But how do we test it?

    # Expect that the new records page will show the correct value of the record
    # We must do this in a loop as we are constantly refreshing the page.
    # We need to stay here and refresh the page
    # 
    # Use Timeout.timeout to stop the execution after Capybara.default_max_wait_time
    Timeout.timeout(Capybara.default_max_wait_time) do
      loop do
        # Visit the page. If you visit the same page a second time
        # it will refresh the page.
        visit "/records"
        # The smart thing here is the wait: 0 param.
        # By default find_all will wait up to Capybara.default_max_wait_time for the
        # elements to appear, because it expects JS to finish. But there is no JS on
        # this page and we want to check the page as is, without waiting.
        #
        # We pass "wait: 0" so find_all checks once and returns immediately.
        break if find_all(:xpath, "//a[@href='/records/#{record.to_param}' and text()='Continue']", wait: 0).any?
    
        # If we could not find our record we sleep for 0.25 seconds and try again.
        sleep 0.25
      end
    end
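
    If more specs need this, the loop can be extracted into a small helper. A minimal sketch, assuming a spec/support file that RSpec loads; the module and method names are hypothetical:

    # spec/support/refresh_helpers.rb (hypothetical helper extracted from the loop above)
    module RefreshHelpers
      # Revisit the given path until the block returns a truthy value
      # or Capybara.default_max_wait_time elapses.
      def refresh_until(path)
        Timeout.timeout(Capybara.default_max_wait_time) do
          loop do
            visit path
            break if yield
            sleep 0.25
          end
        end
      end
    end

    RSpec.configure do |config|
      config.include RefreshHelpers, type: :feature
    end

    # In the spec:
    # refresh_until("/records") { find_all(:xpath, "...", wait: 0).any? }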

    I hope it is helpful.

    Want to keep in touch – find me on LinkedIn or Twitter.

     
  • kmitov 6:41 am on June 5, 2021 Permalink |
    Tags: rails 6

    Yet another random failing spec 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    This article is about a random failing spec. I spent more than 5 hours on this trying to track it down so I decided to share with our team what has happened and what the stupid mistake was.

    Random failing

    Random failing specs pass most of the time and fail sometimes. The context of their failure seems to be random.

    Context

    At FLLCasts.com we have categories. There was an error when people were visiting the categories. We receive each and every error in an email, and some of the categories had stopped working because of a wrong SQL query. After the migration from Rails 6.0 to Rails 6.1 some of the queries started working differently, mostly because of eager loading, and we had to change them.

    The spec

    This is the code of the spec

     scenario "show category content" do
        category = FactoryBot.create(:category, slug: SecureRandom.hex(16))
        episode = FactoryBot.create(:episode, :published_with_thumbnail, title: SecureRandom.hex(16))
        material = FactoryBot.create(:material, :published_with_thumbnail, title: SecureRandom.hex(16))
        program = FactoryBot.create(:program, :published_with_thumbnail, title: SecureRandom.hex(16))
        course = FactoryBot.create(:course, :published_with_thumbnail, title: SecureRandom.hex(16))
    
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: episode, category: category)
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: material, category: category)
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: program, category: category)
        category.category_content_refs << FactoryBot.create(:category_content_ref, content: course, category: category)
    
        expect(category.category_content_refs.count).to eq 4
        visit "/categories/#{category.to_param}"
    
        find_by_xpath_with_page_dump "//a[@href='/tutorials/#{episode.to_param}']"
        find_by_xpath_with_page_dump "//a[@href='/materials/#{material.to_param}']"
        find_by_xpath_with_page_dump "//a[@href='/programs/#{program.to_param}']"
        find_by_xpath_with_page_dump "//a[@href='/courses/#{course.to_param}']"
    
      end

    We add a few objects to the category and then we check that we see them when we visit the category.

    The problem

    Sometimes when running the spec only one of the objects in the category is shown. Sometimes none, but most of the time all of them are shown.

    The debug process

    The controller

    def show
      @category_content_refs ||= @category.category_content_refs.published
    end

    In the controller we just call published to get all the published content that is in this category. There are other things in the show action but they are not relevant. We were using apply_scopes, we were using other concerns.

    The model

      scope :published, lambda {
        include_contents.where(PUBLISHED_OR_COMING_WHERE_SQL)
      }

    The scope in the model makes a query for published or coming.

    And the query, I kid you not, that was committed in 2018 and that we've had for so long, was:

    class CategoryContentRef < ApplicationRecord
       
        PUBLISHED_OR_COMING_WHERE_SQL = [' (category_content_refs.content_type = \'Episode\' AND (episodes.published_at <= ? OR episodes.is_visible = true) ) OR
         (category_content_refs.content_type = \'Course\' AND courses.published_at <= ?) OR
         (category_content_refs.content_type = \'Material\' AND (materials.published_at <= ? OR materials.is_visible = true) ) OR
         category_content_refs.content_type=\'Playlist\'', *[Time.now.utc.strftime("%Y-%m-%d %H:%M:%S")]*4].freeze
    
    end
    

    I will give you a hint: the problem is with this query.

    You can take a moment and try to see where the problem is.

    The query problem

    The problem is with the .freeze and the constant in the class. The query is initialized when the class is loaded. Because of this it captures the time at the moment the class is loaded and not the time of the query.

    Because the specs are fast, sometimes the class is loaded right before the spec and sometimes there are other specs executed in between.
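
    A minimal sketch of one possible fix follows – keep only the SQL template frozen and compute the timestamp inside the scope, so it is evaluated at query time and not at class-load time. The restructuring below is an illustration (assuming the published scope lives on CategoryContentRef, as the controller call suggests), not the exact code we shipped:

    class CategoryContentRef < ApplicationRecord
      # Only the SQL template is frozen; it contains no timestamps.
      PUBLISHED_OR_COMING_WHERE_SQL =
        " (category_content_refs.content_type = 'Episode' AND (episodes.published_at <= ? OR episodes.is_visible = true) ) OR
         (category_content_refs.content_type = 'Course' AND courses.published_at <= ?) OR
         (category_content_refs.content_type = 'Material' AND (materials.published_at <= ? OR materials.is_visible = true) ) OR
         category_content_refs.content_type='Playlist'".freeze

      # The current time is computed on every call, one value per placeholder.
      scope :published, lambda {
        now = Time.now.utc.strftime("%Y-%m-%d %H:%M:%S")
        include_contents.where(PUBLISHED_OR_COMING_WHERE_SQL, *[now] * 3)
      }
    end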

    It seems simple once you see it, but these are the kind of things that you keep missing while debugging. They are right in front of your eyes and yet sometimes you just can't see them, until you finally see them and then you cannot unsee them.

     
  • kmitov 3:19 pm on May 31, 2021 Permalink |
    Tags: rails 6

    When caching is bad and you should not cache. 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    On Friday we did some refactoring at FLLCasts.com. We removed Refinery CMS, which is a topic for another article, but one issue popped up – on a specific page caching was used in a way that made the page very slow. This article is about how and why. It is mainly for our team as a way to share the knowledge among ourselves, but I think the whole community could benefit, especially the Ruby on Rails community.

    TL;DR;

    When making a request to a cache service, be it Memcached, Redis or any other, you are making a network request. This will include a get(key) call and, if the value is not stored in the cache, a set(key, value) call. When the calculation you are caching is simple, it will take more time to cache the result than to do the calculation again, especially if the calculation is a simple string concatenation.

    Processors (CPUs) are really good at string concatenation and can do it in single-digit milliseconds. So if you are about to cache something, make sure that you cache something worth caching. There is absolutely no reason to cache the result of:

    # Simple string concatenation. You calculate the value. No need to cache it.
    value = "<a href=#{link}>Text</a>". 
    
    # The same result, but with caching
    # There isn't a universe in which the code below will be faster than the code above.
    hash = calculate_hash(link)
    cached_value = cache.get(hash)
    if cached_value == nil
       cached_value = "<a href=#{link}>Text</a>". 
       cache.set(hash, cached_value)
    end 
    
    value = cached_value

    Context for Rails

    Rails makes caching painfully easy. Any server-side generated HTML can be cached and returned to the user.

    <% # The call below will render the partial "page" for every page and will cache the result %>
    <% # Pretty simple, and yet there is something wrong %>
    <%= render partial: "page", collection: @pages, cached: true %>

    What’s wrong is that we open the browser and it takes more than 15 seconds to load.

    Here is a profile result from New Relic.

    As you can see there are a lot of Memcached get calls – like 10 – and a lot of set calls. There are also a lot of Postgres find calls. All of this is because of how caching was set up in the platform. The whole "page" partial, after a decent amount of refactoring, turned out to be a simple string concatenation:

    <a href="<%= page.path%>"><%= page.title %></a>

    That's it. We were caching the result of a simple string concatenation which the CPU is quite fast at doing. Because there were a lot of pages and we were making the call for all of the pages, when opening the page for the first time it just took too long to make all the get(key) and set(key, value) calls and the page was returning a "Time out".
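
    One way out – a minimal sketch of what the conclusion below implies, not necessarily the exact change we shipped – is to drop the cached: true and let the CPU do the concatenation:

    <% # Render the partial for every page without caching. The string %>
    <% # concatenation is cheaper than the get/set round trips to the cache. %>
    <%= render partial: "page", collection: @pages %>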

    Conclusion

    You should absolutely use caching and cache the values of your calculations, but only if those calculations take more time than asking the cache for a value. Otherwise it is just not useful.

     
  • kmitov 8:15 am on May 7, 2021 Permalink |
    Tags: rails 6

    [Rails] Crawlers you don’t want to cause a notification for an error 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We are using the exception_notification gem. It is great because it sends us a direct email when there is an error on the platform. This means we don't have to watch the logs. We get the notification directly in our email.

    But some of the crawlers (like all of them) cause errors, mainly cross-origin errors. The JavaScript we are delivering is not allowed for their domain and this generates an error. One way to stop these errors is to ignore errors caused by crawlers in the exception_notification configuration:

    Rails.application.config.middleware.use ExceptionNotification::Rack,
        ignore_crawlers: %w{Googlebot bingbot YandexBot MJ12bot facebookexternalhit SEMrushBot AhrefsBot},
        :email => {
          :email_prefix => ...,
          :sender_address => ...,
          :exception_recipients => ...
        }

     
  • kmitov 8:05 am on May 7, 2021 Permalink |
    Tags: rails 6

    [Rails] Disabling Forgery Protection on API controllers 

    Forgery protection comes for free in Ruby on Rails and is described in the security guide – https://guides.rubyonrails.org/security.html#cross-site-request-forgery-csrf.

    You don't want forgery protection on some API controllers. If an API controller extends ActionController::Base, forgery protection can be disabled in the following way. Here is an example for a controller accepting requests from Amazon AWS SNS.

    class SnsController < ActionController::Base
    
      # Disable it for this controller. 
      # If there is no session it is just null session
      protect_from_forgery with: :null_session
    
      http_basic_authenticate_with :name => ENV["SNS_USERNAME"], :password => ENV["SNS_PASSWORD"]
    
      ...
    end
    

    An even better approach would be not to extend ActionController::Base, but ActionController::API. But then we would have to include the modules for HttpAuthentication, which is a topic for another article.
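
    A minimal sketch of that alternative, assuming the same controller and ENV variables as the example above. ActionController::API ships without forgery protection, but the basic authentication module has to be included explicitly:

    class SnsController < ActionController::API
      # ActionController::API does not include forgery protection,
      # so there is nothing to disable here.
      include ActionController::HttpAuthentication::Basic::ControllerMethods

      http_basic_authenticate_with name: ENV["SNS_USERNAME"], password: ENV["SNS_PASSWORD"]
    end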

     
  • kmitov 7:56 am on May 7, 2021 Permalink |
    Tags: rails 6

    [Rails] Please use symbols for polymorphic route arguments 

    This error occurred today on our platform. The issue is at https://github.com/rails/rails/issues/42157

    The issue occurs because of a new security patch in Rails, tracked as CVE-2021-22885.

    You can no longer call polymorphic_path or other dynamic path helpers with strings, because if these strings are provided by the user they could result in unwanted route helper calls.

    # previous call
    polymorphic_path([article, "authors"])
    # should now be
    polymorphic_path([article, "authors".to_sym])
    # or better
    polymorphic_path([article, :authors])

    Migration effort

    A perspective on how serious it is to upgrade – tl;dr – it is not – about half an hour.

    All the calls in our platform to polymorphic_path:

    $ git grep  "polymorphic_path" | wc -l
    321

    All the files that have calls to polymorphic_path:

    $ git grep -l  "polymorphic_path" | wc -l
    143

    Number of files that I've changed – 13
    Time it took me to review all of them – 16 minutes, from 18:24 to 18:40

    Now I am waiting for specs to pass.

    I guess it is not a big change and could be migrated in half an hour. Most of our calls were already using symbols and only about 15 calls out of 321 were with strings. That is about 4% of all the calls.

     
  • kmitov 6:34 am on April 2, 2021 Permalink |
    Tags: databases, new relic, performance, rails 6

    Resolving a performance issue in a Rails platform. cancancan, New relic, N+1 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    Today's article is about a performance issue that we had in one of our Rails platforms and how we resolved it. I go into the details of the specific root cause and how New Relic was helpful in identifying the issue. This article is mainly targeted at our team, but I hope it will be useful for the community to see a practical example of cancancan and N+1.

    The root cause

    Every time we show a lesson on the FLLCasts platform we show the tutorials and materials of that lesson. But if the user does not have access to a tutorial, because it is not published or it requires a teachers subscription, we do not show it in the lesson. We filter it out of the lesson.

    We are using cancancan and we've added code for checking whether every tutorial in the course can be accessed – whether you have the right subscription. This was not a problem on its own. The major problem was that we were showing a navigation of the course on the right that contains links to all the lessons with all the tutorials:

    FLLCasts course navigation showing links to all the tutorials in the whole course

    To show the navigation we looped through all the tutorials in the whole course and checked if the user has access to them. It was an N+1. This seemed to work while a course contained fewer than 100 tutorials. But once the courses grew and contained more tutorials the logic got slow.

    People started noticing. We had to increase the number of Heroku dynos from time to time, especially when a lot of students joined and used the platform.

    There were two problems

    1. We don't need to show all the tutorials and building instructions in the navigation. We just need to show them for the current lesson.
    2. We need to make a single query to the DB that returns only the tutorials in that lesson and only the tutorials the user has access to.

    What was the performance impact

    During load, when a lot of students were using the platform, the navigation/_course_section partial was taking 41% of the show lesson request, which averaged 5 seconds. There were about 100 find queries for a single lesson.

    Table with how expensive it was to show the navigation for large courses

    It was taking about 10-15 seconds for the server to respond and sometimes requests were failing.

    Chart showing slow responses. Responses were slow – more than 15 seconds sometimes.

    On the chart below we see how the moment people start opening courses the time for showing lessons (course_sections) starts growing.

    Chart showing sharp increase
    Sharp increase in time the moment we make requests

    Practically the platform was unusable in this case. Everything stopped.

    Solutions

    Change the navigation to include only the current lesson

    We decided that we don't need to show all the tutorials from the whole course in the navigation, only the tutorials for the current lesson. This reduced the load because instead of returning 140 tutorials we were returning 20.

    Improve the call to cancancan

    One thing that we had missed was a call to cancancan's can? :show for each record. It had slipped in over the years and we hadn't noticed.

    records.each do |record|
       if can? :show, record
         ...
       end
    end

    I refactored it to use cancancan's accessible_by method

    @content_refs = @course_section
            .content_refs
            .accessible_by(current_ability)
            .order(:position)

    and added two new :index rules to the ContentRefs ability. The benefit is that this way cancancan makes a single query to the DB.

    module Abilities
      class ContentRefsAbility
        include CanCan::Ability
    
        def initialize user
          can :index, ContentRef, { archived: [nil,false], hidden: false}
          can :index, ContentRef, { archived: true, hidden: [true,false], course_section: { authors: { user_id: user.id }}}
          ...
          can :read, ContentRef, { archived: [nil,false], hidden: true ,course_section: {course: {required_access_for_hidden_refs: {weight: 0..user_max_plan_weight}}}}
          ...
        end
      end
    end

    The third rule is the deal breaker. With it we check if the user has a subscription plan that would allow them to access the content ref.

    With these three rules we allow everybody to see non-archived and non-hidden content_refs, we allow authors of the course_section to see archived and hidden content_refs where they are the author of the course_section, and we allow everybody to see hidden content_refs if they have the proper subscription plan.

    Results

    Number of timeouts reduced from about 180 to 6

    jenkins@elvis:/var/log/fllcasts$ grep "Rack::Timeout::RequestTimeoutException" access.log | wc -l
    6
    jenkins@elvis:/var/log/fllcasts$ zgrep "Rack::Timeout::RequestTimeoutException" access.log-20210320.gz | wc -l
    183
    jenkins@elvis:/var/log/fllcasts$ zgrep "Rack::Timeout::RequestTimeoutException" access.log-20210319.gz | wc -l
    164

    Increase in throughput does not increase the required time

    It does not matter whether we have 1 or 40 rpm for this controller. The time stays the same. Next we can focus on how to make it even faster, but this will do until the platform grows 10x.

     
  • kmitov 7:36 pm on February 15, 2021 Permalink |
    Tags: rails 6

    Usage of ActiveRecord::Relation#missing/associated 

    Today I learned that Rails 6.1 adds the query method associated for checking the presence of an association. Good to know. I am sharing it with the team at Axlessoft along with the community as a whole.

    While reading the article about the methods I thought:

    Are we going to use these methods in our code base?

    I did some greps and it turns out we won’t.

    $ git grep -A 10 joins | grep where.not | grep nil | wc -l
    0
    

    There is not a single place in our code where we call

    .joins(:association).where.not(association: {id: nil})
    
    or
    
    .joins(:association).where(association: {id: nil})
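
    For comparison, here is a minimal sketch of how the new query methods read. The Author/Post models are illustrative, not from our code base:

    # Illustrative models; only the query syntax is the point here.
    Author.where.missing(:posts)     # authors that have no posts
    Author.where.associated(:posts)  # authors that have at least one post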

    What does this mean?

    Nothing actually. Great methods. I was just curious to see whether we would be using them a lot in our code base.

     
  • kmitov 7:31 am on February 3, 2021 Permalink |
    Tags: rails 6

    Rails after_action order is in reverse 

    (Everyday Code – instead of keeping our knowledge in a README.md let’s share it with the internet)

    We've used Rails after_action in controllers twice in our platforms and products. It allows you to execute something after a Rails controller action.

    Why? What is the IRL use case?

    Users upload files for a 3D model or building instructions. We want to call an external service to convert this 3D model or buildin3d instruction, so we schedule a job. We want to ping this external service, so we make a network request in the controller.

    This is never a good idea. Generally you want to return fast from your controller action. On Heroku you have a 15 seconds time limit to return, and for the user experience it is good to return a result right away, do the work in the background and report on the progress with some clever JS.

    But for this specific case we have reasons to try to ping the external service. It generally takes about 0.5-1s, it is not a bottleneck for the user experience, and it works well. Except for the cases when it does not work well.

    Today was such a day. Heroku dynos were trying to connect to the service, but it was taking longer than 15 seconds. I don't know why. So I decided to revise the implementation.

    What is special about after_action order?

    It is the reverse of before_action.

    We want to call the external service in an after action. In this way it is outside of the logic of the controller method and if it fails, we handle the exception in the after_action. It is a good separation. The real code is:

    module IsBackend::JenkinsWorkoff
      extend ActiveSupport::Concern

      included do
        after_action do
          unless @jenkins_rebuild == nil || @jenkins_rebuild.running?
            begin
              JenkinsClientFactory.force_jobs_workoff timeout: 10
            rescue => e
              config.logger.info e.message
            end
          end
        end
      end
    end
    

    The issue is with the order of the after_action callbacks when there is more than one.

    Generally, before_action callbacks are executed in the order they are defined.

      before_action do
        # will be first
      end
    
      before_action except: [:index, :destroy] do
        # will be second
      end

    But after_action callbacks are executed in the reverse order:

      after_action do
        # will be SECOND
      end
    
      after_action except: [:index, :destroy] do
        # will be FIRST
      end

    Why? I will leave the discussion in the GitHub issue to explain it best – https://github.com/rails/rails/issues/5464

    How do we use it?

    When the buildin3d instruction below was built, we pinged the external service as part of the build process. Because of the after_action we can enjoy this FabBrix dog.

    FabBRIX Pets, Dog in 3D building instructions
     
  • kmitov 9:32 pm on January 11, 2021 Permalink |
    Tags: acts_as_paranoid, globalize-rails, rails 6

    The problems with acts_as_paranoid and generally with Soft Delete of records in Rails 

    I am about to refactor and completely remove acts_as_paranoid from one of the models in our platform. There are certain issues that have been piling up over the last few years and it is now already difficult to support. As I am about to tell this to colleagues tomorrow, I thought I would first structure the summary in an article and send the article directly to the whole team.

    If you are thinking about using acts_as_paranoid and generally soft deleting your records, then this article could give you a good understanding of what to expect.

    Disclaimer: I am not trying to roast acts_as_paranoid here. I think it is a great gem that I've used successfully for years and it has helped me save people's work when they accidentally deleted something. It could be exactly what you need. For us it got too entangled and dependent on workarounds with other gems, and I am planning to remove it.

    acts_as_paranoid VS Globalize – problem 1

    We have a model called Material. Material has a title. The title is managed by globalize as we are delivering the content in a number of languages.

    class Material < ApplicationRecord
      acts_as_paranoid
      translates :title
    end

    The problem is that Globalize knows nothing about acts_as_paranoid. You can delete a Material and it should delete the translations, but when you try to recover the Material there is an error, because of how the translations are implemented and the order in which the translations and the Material are recovered. Which record should be recovered first? Details are at https://github.com/ActsAsParanoid/acts_as_paranoid/issues/166#issuecomment-650974373, but here is a quote:

    Ok, then I think I see what happens: ActsAsParanoid tries to recover your translations but their validation fails because they are recovered before the main object. When you just call recover this means the records are just not saved and the code proceeds to save the main record. However, when you call recover!, the validation failure leads to the exception you see.

    In this case, it seems, the main record needs to be recovered first. I wonder if that should be made the default.

    I worked around this and used the workaround like this for a year:

    module TranslatableSoftdeletable
      extend ActiveSupport::Concern
    
      included do
    
        translation_class.class_eval do
          acts_as_paranoid
        end
    
        before_recover :restore_translations
      
      end
    
      def restore_translations
        translations.with_deleted.each do |translation|
          translation.deleted_at = nil
          translation.save(validate: false)
        end
        self.translations.reload
      end
    
    end

    acts_as_paranoid VS Globalize – problem 2

    Let’s say that you delete a Material. What is the title of this deleted material?

    material = Material.find(1)
    material.destroy
    material.title == ?

    Remember that the Material has all of its translations for the title in a table that just got soft deleted. So the correct answer is "nil". The title of the deleted material is nil.

    material = Material.find(1)
    material.destroy
    material.title == nil # is true

    You can work around this with:

    material.translations.with_deleted.where(locale: I18n.locale).first.title

    But this is quite ugly.

    acts_as_paranoid VS cancancan

    The material could have authors. An author is the connection between the User and the Material. We are using cancancan for the controllers.

    In the controller for a specific author we show only the models that are for this author. We have the following rule:

    can [:access, :read, :update, :destroy], clazz, authors: { user_id: user.id }
    

    Here the problem is that the rule can only return non-deleted records. If you would like to implement a Trash/Recycle Bin functionality for the Materials, you cannot reuse the rule. The reason is that cancancan cannot evaluate can? and cannot? for rules defined with an SQL statement, and "authors: { user_id: user.id }" is such a statement.

    What we could do is to use scopes

    # in the ability
    can [:access, :read, :update, :destroy], Material, Material.with_authors(user.id)
    
    # In the Material
    scope :with_authors, -> (user_ids) {
        joins(:authors).where(authors: {user_id: user_ids})
    }
    

    We move the logic for authorization from the ability to the model. I can live with that. It is not that outrageous. But it does not work, because the authors association will return no authors for the Material, as the authors are also soft deleted.

    acts_as_paranoid vs cancancan – case 2

    When declaring load_and_authorize_resource in the controller, it will not fetch records that are already deleted. So you cannot use cancancan to load a deleted record in the controller and you must load it yourself.

    class MaterialsTrashController < ApplicationController
    
      before_action only: [:show, :edit, :update, :destroy] do
        @content_picture = Material.only_deleted.find(params[:id])
      end
    
    
      load_and_authorize_resource :content_picture, class: "Material", parent: false
    
    end

    This was ugly, but as we had one trash controller it was kind of acceptable. With a second one it got more difficult and deviated from the logic of the other controllers.

    acts_as_paranoid vs ContentPictures

    Every Material has many ContentPictures on our platform. There is a ContentPictureRef model. One of the pictures could be the thumbnail. Once you delete a material what is the thumbnail that we can show for this material?

    As the content_picture_refs relation is also soft deleted, we had to change the logic for returning the thumbnail. We changed it from

    def thumbnail
        thumbnail_ref = self.content_picture_refs.where(content_picture_type:  :thumbnail).first
    end

    to

    def thumbnail
        thumbnail_ref = self.content_picture_refs.with_deleted.where(content_picture_type:  :thumbnail).first
    end

    I can live with this. Every relation that we have for the deleted record must be called with "with_deleted". ContentPictures, Authors and other relations all have to be changed for us to be able to see the deleted materials in a table that represents the Trash.

    acts_as_paranoid vs Active Record Callbacks

    There is another issue right at https://github.com/ActsAsParanoid/acts_as_paranoid/issues/168

    It took me hours to debug it a few months back. The order of before_destroy and acts_as_paranoid should not really matter, but the two definitions below behave differently:

    # before_destroy defined after acts_as_paranoid
    class Episode < ApplicationRecord
       acts_as_paranoid
       before_destroy do
          puts self.authors.count
       end
       has_many :authors, dependent: :destroy
    end

    # before_destroy defined before acts_as_paranoid
    class Episode < ApplicationRecord
       before_destroy do
          puts self.authors.count
       end
       acts_as_paranoid
       has_many :authors, dependent: :destroy
    end

    acts_as_paranoid vs belongs_to

    When using acts_as_paranoid, belongs_to should be:

    class Author < ApplicationRecord
       belongs_to :material, with_deleted: true
    end

    I could also live with this. At a certain stage it was easier to just add with_deleted: true to the belongs_to relations.

    acts_as_paranoid vs Shrine

    So the Material has a Shrine attachment stored on Amazon S3. What should happen when you delete the Material? Should you delete the file from S3? Of course not. If you delete the file you will not be able to recover it.

    The solution was to modify the Shrine attacher:

    class Attacher < Shrine::Attacher

      def activerecord_after_destroy
        # Just don't call super and keep the file.
        # Since objects at BuildIn3D & FLLCasts are soft deleted we want to
        # handle the deletion logic in another way.
        true
      end

    end
    

    I lived with this for months and did not have many issues.

    What Pain and Problem are we addressing?

    The current problem that I cannot find a workaround for is the MaterialsTrashController that is using CanCanCan. All the solutions would require this controller to be different from the rest of the controllers, and my biggest concern is that this will later result in issues. I would like to have a single place where we check if a User has access to the Material and whether they can read, update or recover it. If we split the logic of the MaterialsController and the MaterialsTrashController we would end up with a hidden and difficult to maintain duplication.

    But what is the real problem that we want to solve?

    On our platform we have authors for the instructions and we work closely with them. I imagine one particular author that I will call TM (Taekwondo Master). TM uploads a material and from time to time he could accidentally delete it. That's it. When he deletes a Material it should not be deleted, but rather put in a trash. Then when he deletes it from the trash he must confirm with a blood sample. That's it. I just want to stop TM from losing any Material by accident.

    The solution is pretty simple.

    In the MaterialsController just show all the materials that do not have a :deleted_at column set.

    In the MaterialsTrashController just show only the Materials that have the :deleted_at column set.

    I can solve the whole problem with one simple filter that would take me like 1 minute to implement. We don’t need any of the problems above. They simply will not exist.
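
    A minimal sketch of that filter, assuming a plain deleted_at timestamp column on materials and no soft-delete gem (the controller names follow the text above):

    class MaterialsController < ApplicationController
      def index
        # Only Materials that are not in the trash.
        @materials = Material.where(deleted_at: nil)
      end
    end

    class MaterialsTrashController < ApplicationController
      def index
        # Only Materials that were "deleted" and are now in the trash.
        @materials = Material.where.not(deleted_at: nil)
      end
    end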

    That’s it. I will start with the Material and I will move through the other models as we are implementing the additional Trash controllers for the other models.

     