Updates from January, 2021

  • kmitov 7:10 am on January 20, 2021 Permalink |

    ‘bundle update’ – why do we run it regularly and automatically to update our dependencies? 

    (Everyday Code – instead of keeping our knowledge in an README.md let’s share it with the internet)

    Today is Wednesday, and every Wednesday an automatic Jenkins job runs ‘bundle update’. This will be the 567th time this build has run. It has saved us a lot of work, and in this article I would like to share why and how we run ‘bundle update’ automatically.

    What is ‘bundle update’?

    In the Ruby and Ruby on Rails world the dependencies are called gems. Gems could be installed with the ‘gem’ command. ‘bundle’ is a command provided by the Bundler tool. It automates this process. To quote from the Bundler web site:

    Bundler provides a consistent environment for Ruby projects by tracking and installing the exact gems and versions that are needed.

    Bundler is an exit from dependency hell, and ensures that the gems you need are present in development, staging, and production. Starting work on a project is as simple as bundle install.

    https://bundler.io/

    With Bundler you can have two different projects working with two different sets of dependencies and Bundler will make sure the right dependencies are available for each project.

    What is ‘$ bundle install’?

    Both FLLCasts and BuildIn3D are Rails platforms. Before starting the platforms we should install all the dependencies. We use Bundler and the command is

    $ cd platform/
    $ bundle install

    ‘$ bundle install’ will make sure that all the gems that we are dependent on are available on our machine.

    FLLCasts and BuildIn3D for example are dependent on:

    # FLLCasts Gemfile contains
    gem "rails", '~> 6.0.3'
    gem "webpacker", "~> 5.0"
    gem 'turbolinks', '~> 5.2.0'

    This means that we are happy to work with any version of turbolinks that is 5.2.X. It could be 5.2.0, 5.2.1, 5.2.2 and so on. The last digit could change.

    ‘bundle install’ makes sure that you have a compatible version. But what if there is a new version of turbolinks – 5.2.5?

    What is ‘$ bundle update’?

    ‘$ bundle update’ for me is the real deal. It will check what new compatible versions have been released for the gems we depend on. If we are currently using turbolinks 5.2.0 and there is a 5.2.1, ‘bundle update’ will detect this and fetch the new version of turbolinks for us to use. It will write the specific versions that are used into the Gemfile.lock.

    For the turbolinks example the Gemfile.lock contains:

    # Gemfile.lock for FLLCasts and BuildIn3D
        turbolinks (5.2.1)
          turbolinks-source (~> 5.2)

    When should one run ‘$ bundle update’?

    That’s a tough question and a hard sell. Every member of our team has at a certain point asked this question or made a statement like ‘But this is not automatic. I want to update dependencies when I want to update dependencies!’, or has in some way struggled with ‘$ bundle update’. Myself included.

    The answer is:

    As the Rails and Ruby communities are alive and vibrant, things improve quite often. One should run ‘$ bundle update’ regularly, and one should do it automatically, to make sure the dependencies are up to date.

    But updating the dependencies could break the platform!

    Could an automatic ‘bundle update’ break the platform?

    Yes. It could. That’s why we are doing the following:

    # Run bundle update
    $ bundle update
    
    # Commit the new result to a new branch
    $ git push elvis pr_fllcasts_update --force
    
    # Try to merge the new branch like any other branch. Run merge and then the specs
    $ git merge ...
    $ rake spec (on the merge)
    # If rake is successful continue with the merge. If it is not, then abort the merge. 

    We update and commit the changes to a new Branch (or create a Pull Request, depending on the project), and we then run the normal build.

    It will pull from this Branch (or PR) and will run the specs. If the specs pass successfully we continue with the merge.
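    To make this flow concrete, here is a minimal sketch of what such a weekly job could look like, written as a small Ruby script that shells out to git and Bundler. The remote (‘elvis’) and the branch name (‘pr_fllcasts_update’) are taken from the commands above; everything else – the file name, the ‘master’ branch, the commit messages – is an assumption for illustration and not our exact Jenkins configuration.

    # weekly_bundle_update.rb - an illustrative sketch, not our exact Jenkins job

    def run!(command)
      puts "$ #{command}"
      system(command) || abort("Command failed: #{command}")
    end

    # 1. Update the dependencies on a dedicated branch and push it
    run! "git checkout -B pr_fllcasts_update"
    run! "bundle update"
    run! "git commit -am 'Automatic weekly bundle update'"
    run! "git push elvis pr_fllcasts_update --force"

    # 2. The normal build then merges the branch and runs the specs
    run! "git checkout master"
    run! "git merge --no-commit --no-ff pr_fllcasts_update"
    if system("bundle exec rake spec")
      run! "git commit -m 'Merge weekly bundle update'"
    else
      run! "git merge --abort"
      abort "Specs failed - keeping the current dependency versions"
    end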

    In this way we are constantly up to date.

    What is the benefit of automatic ‘$ bundle update’?

    As I’ve written before the main benefit is the allocation of resources and keeping the developers context – https://kmitov.com/posts/the-benefits-of-running-specs-against-nightly-releases-of-dependencies/

    If a change in a gem breaks something that we depend on, we can identify this within a week. We are not waiting months and we can report it now. The gem owners are probably still in the context of the change they’ve introduced and could understand and resolve the issue faster.

    Additionally, we no longer have ‘migrations to new dependencies’ as dedicated tasks on our team. We are not dedicating any resources to migrating to new dependencies, as it happens automatically.

    #  Here are just the gems from Jan 13, 2021 7:00:00 AM
    Installing aws-partitions 1.416.0 (was 1.413.0)
    Installing chef-utils 16.9.20 (was 16.8.14)
    Installing autoprefixer-rails 10.2.0.0 (was 10.1.0.0)
    Installing omniauth 2.0.0 (was 1.9.1)
    Installing http-parser 1.2.3 (was 1.2.2) with native extensions
    Installing dragonfly 1.3.0 (was 1.2.1)
    Installing aws-sdk-core 3.111.0 (was 3.110.0)
    Installing omniauth-oauth2 1.7.1 (was 1.7.0)
    Installing aws-sdk-kms 1.41.0 (was 1.40.0)
    Installing globalize 5.3.1 (was 5.3.0)
    Installing money-rails 1.13.4 (was 1.13.3)
    

    Every Wednesday we update the gems, and if there are problems we know about them in the middle of the week, so we can address them right away or schedule them appropriately for the next weeks.

     
  • kmitov 9:32 pm on January 11, 2021 Permalink |
    Tags: acts_as_paranoid, globalize-rails

    The problems with acts_as_paranoid and generally with Soft Delete of records in Rails 

    I am about to refactor and completely remove acts_as_paranoid from one of the models in our platform. Certain issues have been piling up over the last few years and it is now difficult to support. As I am about to tell this to colleagues tomorrow, I thought I would first structure the summary in an article and send the article directly to the whole team.

    If you are thinking about using acts_as_paranoid, or generally about soft deleting your records, then this article could give you a good understanding of what to expect.

    Disclaimer: I am not trying to roast acts_as_paranoid here. I think it is a great gem that I’ve used successfully for years and it has helped me save people’s work when they accidentally deleted something. It could be exactly what you need. For us it got too entangled and dependent on workarounds with other gems, and I am planning to remove it.

    acts_as_paranoid VS Globalize – problem 1

    We have a model called Material. Material has a title. The title is managed by globalize as we are delivering the content in a number of languages.

    class Material < ApplicationRecord
      acts_as_paranoid
      translates :title
    end

    The problem is that Globalize knows nothing about acts_as_paranoid. You can delete a Material, and it should delete the translations, but when you try to recover the Material there is an error, because of how the translations are implemented and the order in which the translations and the Material are recovered. Which record should be recovered first? Details are at https://github.com/ActsAsParanoid/acts_as_paranoid/issues/166#issuecomment-650974373, but to quote:

    Ok, then I think I see what happens: ActsAsParanoid tries to recover your translations but their validation fails because they are recovered before the main object. When you just call recover this means the records are just not saved and the code proceeds to save the main record. However, when you call recover!, the validation failure leads to the exception you see.

    In this case, it seems, the main record needs to be recovered first. I wonder if that should be made the default.

    I worked around this and I have used the workaround like this for a year:

    module TranslatableSoftdeletable
      extend ActiveSupport::Concern
    
      included do
    
        translation_class.class_eval do
          acts_as_paranoid
        end
    
        before_recover :restore_translations
      
      end
    
      def restore_translations
        translations.with_deleted.each do |translation|
          translation.deleted_at = nil
          translation.save(validate: false)
        end
        self.translations.reload
      end
    
    end

    acts_as_paranoid VS Globalize – problem 2

    Let’s say that you delete a Material. What is the title of this deleted material?

    material = Material.find(1)
    material.destroy
    material.title == ?

    Remember that the Material has all of its translations for the title in a table that just got soft deleted. So the correct answer is “nil”. The title of the deleted material is nil.

    material = Material.find(1)
    material.destroy
    material.title == nil # is true

    You can work around this with:

    material.translations.with_deleted.where(locale: I18n.locale).first.title

    But this is quite ugly.
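    One way to hide that ugliness is to wrap it in a small helper on the model. Below is just a sketch of how the workaround could be encapsulated – the method name is made up for the illustration and it assumes the translation class is also paranoid, as in the concern above.

    class Material < ApplicationRecord
      acts_as_paranoid
      translates :title

      # Hypothetical helper that returns the title even for a soft deleted record
      def title_with_deleted(locale = I18n.locale)
        translation = translations.with_deleted.find_by(locale: locale)
        translation&.title
      end
    end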

    acts_as_paranoid VS cancancan

    The material could have authors. An author is the connection between the User and the Material. We are using cancancan for the controllers.

    In the controller for a specific author we show only the models that are for this author. We have the following rule:

    can [:access, :read, :update, :destroy], clazz, authors: { user_id: user.id }
    

    Here the problem is that you can only return non-deleted records. If you would like to implement a Trash/Recycle Bin functionality for the Materials, then you can not reuse the rule. The reason is that cancancan can not have can? and cannot? with an SQL statement, and “authors: { user_id: user.id }” is an SQL statement.

    What we could do is to use scopes

    # in the ability
    can [:access, :read, :update, :destroy], Material, Material.with_authors(user.id)
    
    # In the Material
    scope :with_authors, -> (user_ids) {
        joins(:authors).where(authors: {user_id: user_ids})
    }
    

    We move the logic for authorization from the ability to the model. I can live with that. It is not that outrageous. But it does not work, because the association with authors will return empty authors for the Material as the materials are also soft deleted.

    acts_as_paranoid vs cancancan – case 2

    When declaring load_and_authorize_resource in the controller, it will not fetch records that are already deleted. So you can not use cancancan to load a deleted record in the controller and you must load it yourself.

    class MaterialsTrashController < ApplicationController
    
      before_action only: [:show, :edit, :update, :destroy] do
        @content_picture = Material.only_deleted.find(params[:id])
      end
    
    
      load_and_authorize_resource :content_picture, class: "Material", parent: false
    
    end

    This was ugly, but as we had only one trash controller it was kind of acceptable. But with a second one it got more difficult and deviated from the logic of the other controllers.

    acts_as_paranoid vs ContentPictures

    Every Material has many ContentPictures on our platform. There is a ContentPictureRef model. One of the pictures could be the thumbnail. Once you delete a material what is the thumbnail that we can show for this material?

    As the content_picture_refs relation is also soft deleted, we had to change the logic for returning the thumbnail. We changed it from

    def thumbnail
        thumbnail_ref = self.content_picture_refs.where(content_picture_type:  :thumbnail).first
    end

    to

    def thumbnail
        thumbnail_ref = self.content_picture_refs.with_deleted.where(content_picture_type:  :thumbnail).first
    end

    I can live with this. Every relation that we have for the deleted record we must call with “with_deleted”. ContentPictures, Authors and other relations all had to be changed for us to be able to see the deleted materials in a table that represents the Trash.

    acts_as_paranoid vs Active Record Callbacks

    There is another issue right at https://github.com/ActsAsParanoid/acts_as_paranoid/issues/168

    It took me hours to debug it a few months back. The order of before_destroy relative to acts_as_paranoid should not really matter, but it does:

    # acts_as_paranoid declared before the before_destroy callback
    class Episode < ApplicationRecord
       acts_as_paranoid
       before_destroy do
          puts self.authors.count
       end
       has_many :authors, dependent: :destroy
    end

    # acts_as_paranoid declared after the before_destroy callback
    class Episode < ApplicationRecord
       before_destroy do
          puts self.authors.count
       end
       acts_as_paranoid
       has_many :authors, dependent: :destroy
    end

    acts_as_paranoid vs belongs_to

    When using acts_as_paranoid, belongs_to should be:

    class Author < ApplicationRecord
       belongs_to :material, with_deleted: true
    end

    I could also live with this. At a certain stage it was easier to just add with_deleted: true to the belongs_to relations.

    acts_as_paranoid vs Shrine

    So the Material has a Shrine attachment stored on Amazon S3. What should happen when you delete the Material? Should you delete the file from S3? Of course not. If you delete the file you will not be able to recover it.

    The solution was to modify the Shrine Uploader

    class Attacher < Shrine::Attacher
    
      def activerecord_after_destroy 
        # Just don't call super and keep the file.
        # Since objects at BuildIn3D & FLLCasts are soft deleted we want to
        # handle the deletion logic in another way.
        true
      end
    
    end
    

    I was living with this for months and did not have many issues.

    What Pain and Problem are we addressing?

    The current problem that I can not find a workaround for is the MaterialsTrashController that is using CanCanCan. All the solutions would require this controller to be different from the rest of the controllers, and my biggest concern is that this will later result in issues. I would like to have a single place where we check if a User has access to the Material and whether they could read, update or recover it. If we split the logic of the MaterialsController and the MaterialsTrashController, we would end up with a hidden and difficult to maintain duplication.

    But what is the real problem that we want to solve?

    On our platform we have authors for the instructions and we work closely with them. I imagine one particular author that I will call TM (Taekwondo Master). So TM uploads a material and from time to time he could accidentally delete it. That’s it. When he deletes a Material it should not be deleted, but rather put in a trash. Then when he deletes it from the trash he must confirm with a blood sample. That’s it. I just want to stop TM from losing any Material by accident.

    The solution is pretty simple.

    In the MaterialsController just show all the materials that do not have the :deleted_at column set.

    In the MaterialsTrashController just show only the Materials that have :deleted_at set.

    I can solve the whole problem with one simple filter that would take me about a minute to implement. We don’t need any of the problems above. They simply will not exist.
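    A minimal sketch of that filter could look like the code below. It assumes a plain deleted_at timestamp column on materials and no acts_as_paranoid; the scope and method names are made up for the illustration.

    class Material < ApplicationRecord
      # deleted_at is just a plain timestamp column
      scope :not_in_trash, -> { where(deleted_at: nil) }
      scope :in_trash,     -> { where.not(deleted_at: nil) }

      # "Deleting" a material only marks it, so it could always be recovered
      def move_to_trash!
        update!(deleted_at: Time.current)
      end
    end

    class MaterialsController < ApplicationController
      def index
        @materials = Material.not_in_trash
      end
    end

    class MaterialsTrashController < ApplicationController
      def index
        @materials = Material.in_trash
      end
    end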

    That’s it. I will start with the Material and I will move through the other models as we are implementing the additional Trash controllers for the other models.

     
  • kmitov 9:56 pm on November 28, 2020 Permalink |

    How to do headless specs with the BABYLON JS NullEngine 

    [Everyday code]

    In buildin3d.com we are using BABYLON JS. To develop headless specs for BABYLON JS that could run in a Node.js environment, or without the need for an actual canvas, we can use BABYLON.NullEngine. A spec could then look like:

     const engine = new BABYLON.NullEngine();
     this.scene = new BABYLON.Scene(engine);

    Here is what I found out.

    Headless specs

    We run a lot of specs for our BABYLON JS logic. All of these specs run against preview.babylonjs.com. The preview version of Babylon gives us access to the latest changes of Babylon that are made available to the public. These are not yet released changes, but the work in progress of the framework. They are quite stable, so I guess at least the internal suite of BABYLON has passed. Preview is much like a nightly build. In BABYLON JS’s case it is also quite stable.

    2 days ago many of our specs failed. I reported it at https://forum.babylonjs.com/t/failure-error-typeerror-cannot-read-property-trackubosinframe-of-undefined/16087. There were a lot of errors like:

    Cannot read property 'trackUbosInFrame' of undefined.

    It turns out that many of our specs were using BABYLON.Engine to construct the scene, like:

    const engine = new BABYLON.Engine();
    this.scene = new BABYLON.Scene(engine);

    The BABYLON.Engine class is only intended for the cases where we have a canvas. What we should have been doing for headless specs without a canvas is to use BABYLON.NullEngine. Hope you find this helpful.

    Write more specs.

     
  • kmitov 5:36 pm on November 17, 2020 Permalink |

    Technical (code) Debt and how we handle it. 

    The subject of technical debt is interesting to me. I recently got a connection on LinkedIn offering to help us identify, track and resolve technical debt, and this compelled me to write this article and give some perspective on how we manage technical debt in our platforms and frameworks, stopping specifically on a few examples from the fllcasts.com and buildin3d.com platforms along with the Instructions Steps (IS) framework that we are developing. Hope it is useful for you all.

    What is technical debt?

    Here is the definition from the first source that comes up. It is pretty straightforward:

    Technical debt is a concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution.

    (https://www.techopedia.com/definition/27913/technical-debt)

    What does it look like?

    The way I think about technical debt is this – we write certain structures, like for example an “if”, which in many cases is technical debt. Of course there are many other types, but I will stop at this one for this article.

    // In this example we do a simple if for a step when visualizing 3D assembly instructions.
    // If there is an animation for the specific step, created by the author and persisted in the file, we play this animation; if there is no animation in the file we create a default animation.
    if(step.hasAnimation()) {
     return new AnimationFromFile(step)
    } else { 
     return new DefaultAnimation(step)
    }

    The problem with technical debt here is that later in the development of the project two new cases might arise.

    Case 1 – we want to provide the AnimationFromFile(step) functionality only to paying customers; otherwise we return just the default animation. The code then becomes:

    // We check if the user is subscribed and only then provide the better experience
    if(step.hasAnimation() && customer.hasSubscription()) {
     return new AnimationFromFile(step)
    } else { 
     return new DefaultAnimation(step)
    }

    Why is this bad? Where is the debt? The debt is that we are now coupling the logic for playing animations with the logic for customers that have subscriptions. This means that when there are changes to the subscription logic and API, we must also change the logic for handling animations.

    Case 2 – we introduce a third type of animation that is only for users with a certain WebGL feature in their browser. The code becomes:

    // We check if the browser supports the WebGL feature in question and then return a FancyAnimation.
    if(step.hasAnimation() && customer.hasSubscription()) {
     return new AnimationFromFile(step)
    } else {
     if (webGlFeaturePresent()) {
       return new FancyAnimation()
     } else {
       return new DefaultAnimation(step)
     }
    }

    Now we have logic that knows about creating a DefaultAnimation, about reading from files, about what a subscription is and when users are subscribed, and it also knows a lot about the browsers and their support for WebGL.

    At a certain point in time we would have to refactor this logic and separate it into more decoupled pieces. That is a technical debt.

    How was Technical Debt created (in the example above)?

    We took the easy path now by placing one more if in the logic, but we knew that at some point we would have to refactor the logic.

    Should we avoid Technical Debt?

    I think a good architecture could prevent a lot of the technical debt from occurring. A good, decoupled architecture with small units with clear boundaries and no state would result in zero technical debt, and we should strive to create such systems. Practically, the world is not perfect. In a team of engineers, even if you spend all your time fighting technical debt, it is enough for one colleague in one instance to take the easy path and add one more if to “fix this in 5 minutes instead of 3 hours” and the technical debt is already there. You’ve borrowed from the future.

    How do we track technical debt?

    I have personally learned to live with some technical debt. If I now do

    $ git grep "FIXME" 

    in one of our platforms we get 37 results. These are 37 places where we think we could have made the implementation better, and we even have some idea how, but we’ve actively decided that now is not the time for this. Probably this part of the code is not used often enough. Or we are waiting for specific requirements from specific clients and we would address it then – when there is someone to “pay for it”. Can we address it now? But of course. It would take us two, three days, but the question is why. Why should we address this now? Would it bring us more customers, would it bring more value to the customers? It would surely make our life easier. So it is a balance.

    Our balance

    I can summarize our balance like this.

    1. We identify a part of the code as technical debt (because we do regular code reviews).
    2. We try to look at “what went wrong” and understand how this could be implemented better. We might even try in a different branch, but we do not spend too many resources on this.
    3. We then know “what went wrong” and we agree to be more careful and not take on debt the next time, but to instead implement it in the right way the first time.
    4. After that we decide if it is worth it to refactor the current issue – are there new clients coming that would ask us for modifications in these parts of the code?

    That’s it.

    Simple “FIXME”, “TODO”, “NOTE”, “IMPORTANT”, “SECURITY” tags in the code, git grep to see where we are, and a balance with trying to learn how to do it correctly next time.

    How can we solve Technical Debt for the example above?

    In buildin3d we have a framework with an event-driven plugin architecture. So for us it was simply a matter of registering a different plugin for the different features.

    // Pseudo code is 
    framework.register(new FancyAnimation())
    framework.register(new AnimationFromFile()) 
    framework.register(new DefaultAnimationExtension())
    
    framework.arriveOnStep((step) => {
      // ...ask all the extensions for an animation and play the first animation that is returned
    })

    The question is at which stage do you invest in a framework.

    What about MVP(s)?

    The greater balance is sometimes between an MVP and a working product. On one occasion we had an open source tool that was doing exactly what we needed. It was converting one 3D file format to a different 3D file format. We started the project. We used the open source tool. We delivered a working MVP in about a month and we took on a lot of debt, because this tool came with other dependencies and was clearly not developed to be supported and extended. It was clear from the beginning that once new client requirements started coming we would have to re-write almost everything. And we waited. We waited for about 2 years. For 2 years we were extending the initial implementation, and one day a client came with a requirement that we could no longer support. Then it took us about 6 months to re-write the whole implementation in a completely new, much more extensible way that could easily accommodate new requirements.

    Conclusion

    Try not to take technical debt.

    If you have to then at least try to learn why it happens and learn how not to do it in the future. You will exponentially become better.

    Write down a comment in the code about why you think this is debt and how it should be approached. Spend some time reviewing and resolving debt if it pays off.

     
  • kmitov 6:48 am on November 2, 2020 Permalink |

    A week ago I gave a nice lecture about the Google Closure Compiler and how to use it in ADVANCED_OPTIMIZATIONS mode. It is available in Bulgarian at https://softuni.bg/trainings/3194/advance-javascript-compilation-with-google-closure-compiler-why-and-how

     
  • kmitov 6:57 pm on October 31, 2020 Permalink |
    Tags: startup   

    From MVP to first support Ticket – 1 month 

    1 month. That’s how long it took for clients to move from “let’s try your service” to “I need support. Why is this not working for my very special case? I love it and I want this service!”

    1 month. That’s how long it took from starting the servers that are running BuildIn3D (https://www.buildin3d.com) to the first support request.

    Users come and try the service. They upload a few hundred instructions. Errors occur and we fix them, but no user ever sends a support request. It took one month until someone was brave enough to ask for support. I guess that in the beginning people just don’t believe your service is actually working, and it takes about a month for them to get from “let’s just try it out” to “I demand this should work. I need it.”

     
  • kmitov 4:25 pm on October 23, 2020 Permalink |
    Tags: mandarin, utf, wget

    "wget 1.20 is here!" (never in my life I though I'd say that) 

    (this article is part of the Everyday Code series)

    There are tools that just work. Low level tools doing much of the heavy lifting, and the one thing you know about them is that they work. You never write

    
    $ mv --version source target
    

    For that matter you also never write

    $ cp --version 1.3 source target
    $ ssh --version 1.2.1 use@machine
    $ curl --version 773.2  url

    The same applies to wget. There are tools that always work, because they do one thing and they do it well. Not that there are no versions. There are. But today was the first time in my career that we had to upgrade a wget version. We moved from version 1.17 to 1.20.

    Why the change

    What could have changed that made it important to update from 1.17 to 1.20?

    It was the introduction of support for some Mandarin symbols. Mandarin. A language. Users were uploading file names with such symbols and we had to support them.

    The internet. What a beautiful place.

     
  • kmitov 5:38 am on October 19, 2020 Permalink |
    Tags: management

    Don’t fix the issue in the software. Improve the process. 

    Yesterday one of the features on our platform did not work. I was in a meeting, demonstrating it over a shared screen and talking with a potential client. I went to the page showing the IS Editor in our buildin3d.com platform, and the editor for editing the assembly instructions did not start. A little rush of embarrassment and a few milliseconds later, I knew what I had to do. Thanks to my seniority and extended experience in the world of web development, I moved my fingers lightning fast on the keyboard and refreshed the page. The editor started. The demonstration continued.

    I remembered that I had stumbled upon this issue a few days earlier and had seen that the IS Editor was not loading when you first visit the page. The meeting continued and I said something like “Sometimes when we are sharing the screen my bandwidth is small, so we have to wait”. I suppose the client did not exactly understand what had just happened, but what I know is that the next time they try it on their side it will not work and they will be disappointed.

    Right after the meeting I was facing a problem. Should I open the repo and start debugging, or should I wait a day or two for our team to look at this?

    One of the most difficult things about running a software company as a good software developer is having the patience to wait for the team of developers to resolve an issue.

    I was close to mad. How difficult could it be? After you commit something, just go to the platform and see that it works. We have a lot of automation, a lot of testing and specs that have helped us a lot. We have a clean and, I would say, quite fast process for releasing a new version of any module to the platform. It takes anywhere from 2 minutes to about 20 minutes depending on what you are releasing. So after you release something, just go and see and test and try it and make sure it works. How difficult could it be?

    I was mad. Like naturally and really mad. Not because this demonstration was almost ruined by this issue. I was mad that we’ve spent about 3-4 months working on this editor and it currently does not start. It is not true that the editor itself is not working. It is just not starting. Once it starts it works flawlessly, but a misconfiguration in the way it is started prevents it from even starting.

    It’s like getting to your Ferrari and it does not start because of a low battery in your key or something. There is nothing wrong with the Ferrari itself, but your key is not working.

    In this state of anger I opened up the repo. I tracked down the moment it was introduced. And here is the dilemma:

    1. Should I now start debugging it, and resolving it?
    2. Should I just revert the last 11 days of commits and return the platform to a previous state, completely removing the great improvements we’ve introduced in these last 11 days?
    3. Should I leave it for the next few days for the team to look at?

    The worst part is that I can fix the issue myself. But that is not my job. My team counts on me to spend more of my time with potential & existing clients, talking and discussing with them, looking for ways they could integrate us. But at the same time I had an issue where a major feature was not working and would not work for the next few days, and in one sleepless night I could resolve it.

    I don’t have this problem with the other departments. When there is an issue with some of the 3D product animations and models, or there is an issue with some of the engineering designs, I do not feel the urge to go and resolve this issue. I have the patience to rely on the team for this, basically because I lack the knowledge and the tools to resolve such issues.

    Years ago, when we were starting with 3D animations and models, I had great interest, but I openly refused to install any 3D animation and modeling software on my machine. I knew myself and I knew my team. In school and in university I was trying out some 3D models and animations and it felt great. I learned a lot and I had a great time working on such projects. So I knew that if I installed some of this software on my machine, there would be issues that would come to me, and that was not my role in my organization.

    The same goes for engineering. I have the complete patience to wait for days for an engineering design task to complete. I never start SOLIDWORKS myself to go on and “fix the things”. I could. I just don’t want to, as it will distract me from other important things and I know I can count on the engineers to do it.

    But with software it is always a little difficult. Not that I can not delegate. I can. There are large parts of the code we are running that I have never touched or changed. So I thought – why was this particular issue different? What was my problem? Why was it bothering me? Why was this different from any other issue in software development that is reported, debugged and resolved? Where did the anger come from?

    I was angry because the process I’ve set up has allowed this issue to occur.

    The IS Editor was working a few days ago. Now it was not working. This was not an issue of my software development skills; this was a challenge for my “organizing a software development process that produces working software and deploys it to production a few times a day, in a team with a large code base and a new R&D challenge that we are working on”.

    This I have found, in my experience, to be the most difficult problem for good software developers, one that mediocre and bad software developers do not face. When you know how to fix it and how to implement it, and you take on the task, then your time and energy is spent on resolving the issue. It might be better for the team as a whole if you spend your energy and resources on different tasks – like how to avoid a regression in a multi-team, multi-framework environment.

    Know what is important and where your efforts would be most valuable. I’ve stepped up and done a lot of software development in the team. I’ve single-handedly implemented a number of frameworks. Not just the architecture, but the actual implementation. I once deleted two human-years of development and re-implemented the whole module almost from scratch. There is even a saying in the team: “Kiril will roll up his sleeves and will implement this”.

    But no.

    There will always be issues in software development, and we should think about whether our task is to resolve these issues or to make sure these issues never occur in the first place. The latter is objectively the more important and difficult task.

     
  • kmitov 5:41 am on October 6, 2020 Permalink |
    Tags: progressive web application, pwa

    The path to a progressive web app – or how we skipped the whole JS Frameworks on the client thing. 

    I recently responded to a question in the Stimulus JS forum that prompted me to write this blog.

    About 6 months ago we decided to skip the whole “JSON response from the server and JS framework on the client” stuff and we’ve never felt better. We significantly reduced the code base while delivering more features. We managed to do it with much fewer resources. Here is an outline of our path.

    Progressive Web Application

    We had a few problems.

    1. Specs that were way too fragile and a user experience that was way too fragile. Fragile specs that fail from time to time are not a problem on their own. They are an indicator that the features also do not work in the client browsers from time to time.
    2. Too much and too difficult JS on the client. Although making a JSON request from the client to the server and generating the HTML from the response might seem like a good idea, it is not always a good idea. If we are to generate the HTML from a JSON, why don’t we ask the server for the HTML and be done with it? Yes, some would say it is faster, but we found out it is basically the same for the server to render ‘{“video_src”: “https://…”}’ or to render ‘<video src=”https://…”></video>’. The drawback is that in the first scenario you must generate the video tag on the client and this means more work. Like twice the amount of work.

    So we said:

    Let’s deliver the platform to a browser that has NO JS at all, and if it has, we would enhance it here and there.

    How did it work out?

    In retrospect… best decision ever. Just know that there is no JS in the browser and try to deliver your features. Specs got a lot faster and better – down from 1h 40m to 31 minutes. They are not fragile. We have very little JS. The whole platform is much faster. We use one framework less. So, I see no drawbacks.

    First we made the decision not to have a JS framework on the client and to drop this idea as a whole. For our case it was just adding complexity and one more framework. This does not happen overnight, but it could happen. So we decided that there is no JS and the whole platform should work with JS disabled in the browser (those Bootstrap navigation menus are a pain in the a…). It should be a progressive web application (PWA).

    After these decisions we did not replace JSON with Ajax calls. We skipped most of them entirely. Some JSON requests could not be skipped, but we changed them to AJAX – for example “generating a username”. When users register they could choose a username, but to make it easier for them we generate one by default. When generating it we must make sure it is a username that does not exist in the DB. For this we need to make a request to the server, and this is one place we are using Stimulus to submit the username.
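    The server side of that call could be as small as the sketch below – a hypothetical controller that keeps generating candidates until it finds one that is not taken. The controller name, route and the way the candidate is built are assumptions for illustration, not our exact implementation.

    # A sketch of an endpoint that returns a generated, unused username
    class UsernamesController < ApplicationController
      def new
        username = loop do
          candidate = "user#{SecureRandom.hex(4)}"
          break candidate unless User.exists?(username: candidate)
        end
        render json: { username: username }
      end
    end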

    A place where we still use JSON is with Datatables – it is just so convenient. There are also a few progress bars that are making some legacy JSON requests.

    Overall we have Ajax here and there and a few JSON requests, but that’s it. Like 90-95% of the workflow works with JS disabled.

    We even took this to the extreme. We are testing with browsers with JS and browsers without JS. So a delete button on a browser without JS does not open a confirmation, but with JS enabled the delete opens a confirmation. I was afraid this would introduce a lot of logic in the specs, but I am still surprised it did not. We have one method, “js_agnostic_delete”, with an if statement that checks if JS is enabled and decides what to do.
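    For context, such a helper could look roughly like the sketch below. This is an approximation with Capybara and RSpec, not our exact implementation; the way it detects whether the current driver runs JS and the button text are assumptions.

    # spec/support/js_agnostic_helpers.rb - a sketch, not our exact helper
    module JsAgnosticHelpers
      # Deletes a record through the UI, regardless of whether the
      # current Capybara driver has JavaScript enabled
      def js_agnostic_delete(delete_button_text = "Delete")
        if Capybara.current_driver == Capybara.javascript_driver
          # With JS enabled the delete button opens a confirmation dialog
          accept_confirm { click_on delete_button_text }
        else
          # Without JS the button submits the form directly
          click_on delete_button_text
        end
      end
    end

    RSpec.configure do |config|
      config.include JsAgnosticHelpers, type: :system
    end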

    My point is that moving JSON to Ajax 1:1 was not for us. It would not pay off, as we would basically be doing the same thing in another format. What really paid off, and allowed us to reduce the code base by about 30-40%, increase the speed and make the specs not so fragile, was to say – “let’s deliver our platform to a JS-disabled browser, and if it has JS, then great.”

    To give you even more context, this was a set of decisions we made in April 2020 after years of getting tired of JS on the client. We are also quite experienced with JS, as we’ve built a pretty large framework for 3D that runs entirely in the browser, so it was not a lack of knowledge and experience with JS on our side that brought us to these decisions. I think the whole team grew up enough to finally do without JS.

     
  • kmitov 9:56 am on April 4, 2020 Permalink |
    Tags: geminabox, rake

    bundle exec vs non bundle exec. 

    This article is part of the series [Everyday code].

    We use Bundler to pack parts of the Instruction Steps Framework, especially the parts that should be easy to port to the Rails world. We learned something about Bundler, so I decided to share it with everybody.

    TL; DR;

    Question is – which of these two should you use:

    # Call bundle exec before rake
    $ bundle exec rake 
    
    # Call just rake
    $ rake

    ‘bundle exec rake’ will look at what is written in your .gemspec/Gemfile while rake will use whatever is in your env.

    (Image: Gem… but in a box – gem inabox with bundler)

    Bundle exec

    For example we use geminabox, a great tool to keep an internal repo of plugins. In this way Rails projects can include the Instruction Steps framework directly as a gem. This makes it very easy for Rails projects to use the Instruction Steps.

    To put a gem in the repo one must execute:

    $ gem inabox

    You could make this call in three different ways. The difference is subtle, but important.

    Most of the time the env in which your shell is working will be almost the same as the env in which the gem is bundled – except for the cases when it is not.

    From the shell

    # This will use the env of the shell. Whatever you have in the shell.
    $ gem inabox

    From a rake file

    If you have this rake file

    require 'rails/all'
    
    task :inabox do 
      system("gem inabox")
    end

    then you could call rake in the following ways:

    rake inabox

    # This will call rake in the env defined by the shell in which you are executing
    $ rake inabox

    bundle exec rake inabox

    # This will call rake in the env of the gem
    $ bundle exec rake inabox

    When using the second call, Bundler will look at the ‘.gemspec’/‘Gemfile’ and what is in the gemspec. If none of the gems in the .gemspec adds the ‘inabox’ command to the env, then the command is not found and an error occurs like:

    ERROR:  While executing gem ... (Gem::CommandLineError)
        Unknown command inabox

    If ‘gem inabox’ is called directly from the shell it works, but to call ‘gem inabox’ from a rake job you must have ‘geminabox’ as a development dependency of the gem. When calling ‘gem inabox’ from a shell we are not using the development env of the gem, we are using the env of the shell. But once we call ‘bundle exec rake inabox’ and it calls ‘gem inabox’, this second call is in the environment of the gem. So we should have a development dependency on the ‘geminabox’ gem:

     spec.add_development_dependency 'geminabox'

    Nice. Simple, nice, logical. One just has to know it.

     