Updates from October, 2020

  • kmitov 4:25 pm on October 23, 2020
    Tags: mandarin, utf, wget

    "wget 1.20 is here!" (never in my life I though I'd say that) 

    (this article is part of the Everyday Code series)

    There are tools that just work. Low-level tools that do much of the heavy lifting, and the one thing you know about them is that they work. You never write

    
    $ mv --version source target
    

    For that matter you also never write

    $ cp --version 1.3 source target
    $ ssh --version 1.2.1 user@machine
    $ curl --version 773.2 url

    The same applies to wget. There are tools that always work, because they do one thing and they do it well. Not that there are no versions. There are. But today was the first time in my career that we had to upgrade a wget version. We moved from version 1.17 to 1.20.

    Why the change

    What could have changed that made it important to update from 1.17 to 1.20?

    It was the introduction of support for some Mandarin characters. Mandarin. A language. Users were uploading file names with such characters and we had to support them.

    The internet. What a beautiful place.

     
  • kmitov 5:38 am on October 19, 2020
    Tags: management

    Don’t fix the issue in the software. Improve the process. 

    Yesterday one of the features on our platform did not work. I was in a meeting, demonstrating it over a shared screen and talking with a potential client. I went to the page showing the IS Editor on our buildin3d.com platform, and the editor for editing the assembly instructions did not start. A little rush of embarrassment and a few milliseconds later I knew what I had to do. Thanks to my seniority and extended experience in the world of web development, I moved my fingers lightning fast on the keyboard and refreshed the page. The editor started. The demonstration continued.

    I remembered that I had stumbled upon this issue a few days earlier and had seen that the IS Editor was not loading when you first visit the page. The meeting continued, and I said something like "Sometimes when we are sharing the screen my bandwidth is small, so we have to wait". I suppose the client did not exactly understand what had just happened, but what I know is that the next time they try it on their side it will not work and they will be disappointed.

    Right after the meeting I was facing a problem. Should I open the repo and start debugging, or should I wait a day or two for our team to look at this?

    One of the most difficult things about running a software company as a good software developer is having the patience to wait for the team of developers to resolve an issue.

    I was close to mad. How difficult could it be? After you commit something, just go to the platform and see that it works. We have a lot of automation, a lot of tests and specs that have helped us a lot. We have a clean and, I would say, quite fast process for releasing a new version of any module to the platform. It takes anywhere from 2 minutes to about 20 minutes depending on what you are releasing. So after you release something, just go and see and test and try it and make sure it works. How difficult could it be?

    I was mad. Like naturally and really mad. Not because this demonstration was almost ruined by this issue. I was mad that we'd spent about 3-4 months working on this editor and it currently does not start. It is not true that the editor itself is not working. It is just not starting. Once it starts it works flawlessly, but a misconfiguration in the way it is started prevents it from even starting.

    It's like getting into your Ferrari and it does not start because of a low battery in your key or something. There is nothing wrong with the Ferrari itself, but your key is not working.

    In this state of anger I opened up the repo. I tracked down the moment it was introduced. And here is the dilemma:

    1. Should I now start debugging it, and resolving it?
    2. Should I just revert the last 11 days of commits and return the platform to a previous state, completely removing the great improvements we've introduced in these last 11 days?
    3. Should I leave it for the next few days for the team to look at?

    The worst part is that I can fix the issue myself. But that is not my job. My team counts on me to spend more of my time with potential & existing clients, talking and discussing with them, looking for ways they could integrate us. But at the same time I had an issue where a major feature was not working and would not work for the next few days, and in one sleepless night I could resolve it.

    I don’t have this problem with the other departments. When there is an issue with some of the 3D product animations and models or there is an issue with some of the engineering designs I do not feel the urge to go and resolve this issue. I have the patience to rely on the team for this. Basically because I lack the knowledge and the tools to resolve such issues.

    Years ago, when we were starting with 3D animations and models, I had great interest, but I openly refused to install any 3D animation and modeling software on my machine. I knew myself and I knew my team. In school and in university I tried out some 3D models and animations and it felt great. I learned a lot and I had a great time working on such projects. So I knew that if I installed some of that software on my machine, there would be issues that would come to me, and that was not my role in my organization.

    Same for engineering. I have the complete patience to wait for days for an engineering design task to complete. I never start SOLIDWORKS myself and go on to "fix the things". I could. I just don't want to, as it would distract me from other important things, and I know I can count on the engineers to do it.

    But with software it is always a little difficult. Not that I can not delegate. I can. There are large parts of the code we are running that I have never touched or changed. So I thought – why was this particular issue different? What was my problem? Why was it bothering me? Why was this different from any other issue in software development that is reported, debugged and resolved? Where did the anger come from?

    I was angry because the process I’ve setup has allowed for this issue to occur.

    The IS Editor was working a few days ago. Now it was not working. This was not an issue of my software development skills, this was a challenge for my “organizing a software development process that produces a working software and deploys it to production a few times a day in a team with a large code base and a new R&D challenge that we were working on”.

    This I have found to be the most difficult problem for good software developers, one that mediocre and bad software developers do not face. When you know how to fix it, how to implement it, and you take on the task, then your time and energy are spent on resolving the issue. It might be better for the team as a whole if you spent your energy and resources on a different task – like how to avoid a regression in a multi-team, multi-framework environment.

    Know what is important and where your efforts would be most valuable. I've stepped up and done a lot of software development in the team. I've single-handedly implemented a number of frameworks. Not just the architecture, but the actual implementation. I once deleted two human-years of development and re-implemented the whole module almost from scratch. There is even a saying in the team: "Kiril will roll up his sleeves and will implement this".

    But no.

    There will always be issues in software development, and we should think about whether our task is to resolve these issues or to make sure these issues never occur in the first place. The latter is objectively the more important and difficult task.

     
  • kmitov 5:41 am on October 6, 2020
    Tags: progressive web application, pwa, stimulus

    The path to a progressive web app – or how we skipped the whole JS Frameworks on the client thing. 

    I recently responded to a question in the Stimulus JS forum that prompted me to write this blog.

    About 6 months ago we decided to skip the whole "JSON response from server and JS framework on the client" stuff, and we've never felt better. We significantly reduced the code base while delivering more features. We managed to do it with far fewer resources. Here is an outline of our path.

    Progressive Web Application

    We had a few problems.

    1. Specs that were way too fragile, and a user experience that was way too fragile. Fragile specs that fail from time to time are not a problem on their own. They are an indicator that the features also do not work in the client browsers from time to time.
    2. Too much and too difficult JS on the client. Making a JSON request from the client to the server and generating the HTML from the response might seem like a good idea, but it is not always a good idea. If we are to generate the HTML from JSON, why don't we ask the server for the HTML and be done with it? Yes, some would say JSON is faster, but we found out it is basically the same for the server to render '{"video_src": "https://…"}' or to render "<video src='https://…'></video>". The drawback is that in the first scenario you must generate the video tag on the client, and this means more work. Like twice the amount of work (see the sketch after this list).
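
    To make the difference concrete, here is a minimal sketch of the two approaches in plain JS (the endpoint names are made up for the example):

    // Approach 1: ask for JSON and rebuild the HTML on the client.
    // The rendering logic now also lives in the browser.
    fetch("/player/video.json")
      .then((response) => response.json())
      .then((data) => {
        const video = document.createElement("video");
        video.src = data.video_src;
        const container = document.querySelector("#player");
        container.innerHTML = "";
        container.appendChild(video);
      });

    // Approach 2: ask for the HTML and just insert it.
    // The server already knows how to render the tag.
    fetch("/player/video")
      .then((response) => response.text())
      .then((html) => {
        document.querySelector("#player").innerHTML = html;
      });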

    So we said:

    Let’s deliver the platform to a browser that has NO JS at all, and if it has, we would enhance it here and there.

    How did it work out?

    In retrospect… the best decision ever. Just assume that there is no JS in the browser and try to deliver your features. Specs got a lot faster and better – 31 minutes, compared to 1h 40m before. They are not fragile. We have very little JS. The whole platform is much faster. We use one framework less. So, I see no drawbacks.

    First we made the decision not to have a JS framework on the client and to drop this idea as a whole. For our case it was just adding complexity and one more framework. This does not happen overnight, but it could happen. So we decided that there is no JS and the whole platform should work with JS disabled in the browser (those Bootstrap navigation menus are a pain in the a…). It should be a progressive web application (PWA).

    After these decisions we did not replace JSON calls with Ajax calls. We skipped most of them entirely. Some JSON requests could not be skipped, but we changed them to Ajax – for example "generating a username". When users register they can choose a username, but to make it easier for them we generate one by default. When generating it we must make sure it is a username that does not exist in the DB. For this we need to make a request to the server, and this is one place where we are using Stimulus to submit the username.
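
    Here is roughly what such a Stimulus controller looks like – a minimal sketch; the endpoint and the names are made up for the example, not our actual code:

    import { Controller } from "stimulus"

    // Fills the username field with a server-generated default
    // that is guaranteed not to exist in the DB yet.
    export default class extends Controller {
      static targets = ["username"]

      connect() {
        fetch("/usernames/generate")
          .then((response) => response.text())
          .then((username) => {
            this.usernameTarget.value = username
          })
      }
    }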

    One place where we still use JSON is DataTables – it is just so convenient. There are also a few progress bars that are making some legacy JSON requests.

    Overall we have Ajax here and there, and a few JSON requests, but that's it. Like 90-95% of the workflow works with JS disabled.

    We even took this to the extreme. We are testing it with browsers with JS and browsers without JS. So a delete button in a browser without JS does not open a confirmation, but with JS enabled the delete opens a confirmation. I was afraid this would introduce a lot of logic in the specs, but I am still surprised it did not. We have one method, "js_agnostic_delete", with an if statement that checks if JS is enabled and decides what to do.
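
    The enhancement side of such a delete button is tiny. A minimal sketch, assuming a data-confirm attribute on the link (our actual markup may differ):

    // Without JS the link simply sends the delete request directly.
    // With JS we intercept the click and ask for confirmation first.
    document.querySelectorAll("a[data-confirm]").forEach((link) => {
      link.addEventListener("click", (event) => {
        if (!window.confirm(link.dataset.confirm)) {
          event.preventDefault() // user cancelled – stay on the page
        }
      })
    })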

    My point is that moving JSON to Ajax 1:1 was not for us. It would not pay off, as we would basically be doing the same thing in another format. What really paid off, and allowed us to reduce the code base by like 30-40%, increase the speed and make the specs not so fragile, was to say: "let's deliver our platform to a JS-disabled browser, and if it has JS, then great."

    To give you even more context, this was a set of decisions we made in April 2020 after years of getting tired of JS on the client. We are also quite experienced with JS, as we've built a pretty large framework for 3D that runs entirely in the browser, so it was not a lack of knowledge and experience with JS on our side that brought us to these decisions. I think the whole team grew up enough to finally do without JS.

     
  • kmitov 9:56 am on April 4, 2020
    Tags: bundler, geminabox, rake

    bundle exec vs non bundle exec. 

    This article is part of the series [Everyday code].

    We use Bundler to pack parts of the Instruction Steps Framework, especially the parts that should be easy to port to the Rails world. We learned something about Bundler, so I decided to share it with everybody.

    TL;DR;

    The question is – which of these two should you use:

    # Call bundle exec before rake
    $ bundle exec rake 
    
    # Call just rake
    $ rake

    ‘bundle exec rake’ will look at what is written in your .gemspec/Gemfile while rake will use whatever is in your env.

    (Image: "Gem… but in a box" – gem inabox with bundler)

    Bundle exec

    For example, we use geminabox, a great tool for keeping an internal repo of gems. In this way Rails projects can include the Instruction Steps Framework directly as a gem. This makes it very easy for Rails projects to use the Instruction Steps.

    To put a gem in the repo one must execute:

    $ gem inabox

    You could make this call in three different ways. The difference is subtle, but important.

    Most of the time the env in which your shell is working will be almost the same as the env in which the gem is bundled. Except for the cases when it is not.

    From the shell

    # This will use the env of the shell. Whatever you have in the shell.
    $ gem inabox

    From a rake file

    If you have this rake file

    require 'rails/all'
    
    task :inabox do 
      system("gem inabox")
    end

    then you could call rake in the following ways:

    rake inabox

    # This will call rake in the env defined by the shell in which you are executing
    $ rake inabox

    bundle exec rake inabox

    # This will call rake in the env of the gem
    $ bundle exec rake inabox

    When using the second call, bundle will look at the ‘.gemspec’/’Gemfile’ and what is in the gemspec. If none of the gems in the .gemspec adds the ‘inabox’ command to the env, then the command is not found and an error occurs like:

    ERROR:  While executing gem ... (Gem::CommandLineError)
        Unknown command inabox

    If ‘gem inabox’ is called directly from the shell it works, but to call gem inabox from a rake job you must have ‘geminabox’ as a development dependency of the gem. When calling ‘gem inabox’ from a shell we are not using the development env of the gem; we are using the env of the shell. But once we call ‘bundle exec rake inabox’ and it calls ‘gem inabox’, this second call is in the environment of the gem. So we should have a development dependency on the ‘geminabox’ gem:

     spec.add_development_dependency 'geminabox'

    Simple, nice, logical. One just has to know it.

     
  • kmitov 10:45 am on April 3, 2020
    Tags: bash

    99 versions are not good enough – [Everyday Code] 

    This article is part of the series [Everyday Code]

    You’ve done nothing until you release more than 99 versions of your product. 99 versions are just not good enough.

    TL;DR;

    Today we released version 103 of is-core – the core of the Instruction Steps Framework. We noticed a bug. Generally the build would produce two files:

    is-core-sdk-6.0.0.pre.103.js - that is the current version 
    is-core-sdk-latest.js  - this is pointing to the content of the latest version. 

    The problem was that while the current version was 103, is-core-sdk-latest.js was pointing to is-core-sdk-6.0.0.pre.99.js.

    As a conclusion – you have done nothing until you've released at least 100 versions of your software (and probably until it has worked through a millennium shift with a leap year, but that's another story).

    Details

    It’s pretty simple actually. This is what we were doing to get the latest file generated:

    # Creates is-core-sdk-latest.js link to the latest compiled 
     cd ../../release
     rm is-core-sdk-latest.js -f
    -latest=`find is-core-sdk-* -type f | tail -1`
    +latest=`ls -1v is-core-sdk* | tail -1`
     echo "Latest sdk is: $latest"
    

    Notice the find is-core-sdk-* -type f | tail -1. If the files are like:

    # Finds all the files, but lists them in non-natural (lexicographic) order of the version integer.
    # This code is: BAD
    $ find is-core-sdk-* -type f 
    is-core-sdk-6.0.0.pre.102.js
    is-core-sdk-6.0.0.pre.103.js
    is-core-sdk-6.0.0.pre.97.js
    is-core-sdk-6.0.0.pre.98.js
    
    # If we get just the tail it will give us version 99 which is clearly not right
    $ find is-core-sdk-* -type f | tail -1
    is-core-sdk-6.0.0.pre.99.js
    

    I have made this mistake at least a few times in my career.
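
    The underlying pitfall is that plain string ordering compares character by character, so "103" sorts before "97". A quick JavaScript illustration (any language behaves the same way):

    const files = [
      "is-core-sdk-6.0.0.pre.102.js",
      "is-core-sdk-6.0.0.pre.103.js",
      "is-core-sdk-6.0.0.pre.97.js",
      "is-core-sdk-6.0.0.pre.99.js",
    ];

    // Lexicographic sort: "102" < "97" because "1" < "9". This is the BAD ordering.
    console.log([...files].sort().pop());
    // => "is-core-sdk-6.0.0.pre.99.js"

    // Natural, numeric-aware sort – the equivalent of ls -v.
    const natural = [...files].sort((a, b) =>
      a.localeCompare(b, undefined, { numeric: true })
    );
    console.log(natural.pop());
    // => "is-core-sdk-6.0.0.pre.103.js"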

    The solution is an option in ls:

    # This code is GOOD
    # This will list all the files
    $ ls -1v is-core-sdk*
    is-core-sdk-6.0.0.pre.97.js
    is-core-sdk-6.0.0.pre.98.js
    is-core-sdk-6.0.0.pre.102.js
    is-core-sdk-6.0.0.pre.103.js
    
    # This will get just the last
    $ ls -1v is-core-sdk* | tail -1
    is-core-sdk-6.0.0.pre.103.js
    

    Moral of the story

    For months I thought we had a rock-solid infrastructure. There was almost no failed build. Delivery to production takes 2 minutes for a pretty complex framework with a lot of projects and modules. And then it "broke" after months of stable work, just as we were about to release version 100.

    Show me the 100th version of your product. Then we can talk.

     
  • kmitov 4:40 am on April 2, 2020
    Tags: promise

    Should you care about the settlement of Promise(s) or use Promise.finally() – [Everyday code] 

    This article is part of the series – [Everyday code]

    – This logic should not be in Promise#finally()?

    – Why? We just care that the Promise is settled.

    – No. We care why it is settled.

    TL;DR;

    You might be tempted to put some spec logic in Promise#finally(), but here is why you should not do it.

    It’s like try/catch/finally

    In the Instruction Steps Framework we try to load the list of parts in the instructions. There could be no list of parts in the instructions. How should we test this?

    Consider the examples:

    it("shows message 'No parts list provided' when there is no parts list", function(done) {
          // get the promise that the part list will be loaded, 
          // but we know that it will not be loaded, 
          // because this is how we setup the test. 
          this.promise = ... 
    
          // Using then()
          this.promise.then(() => {
                  expect($("#partsList").text()).toContain("No parts list provided");
                  done();
                });
    
          // Using catch()
          this.promise.catch(() => {
                  expect($("#partsList").text()).toContain("No parts list provided");
                  done();
                });
    
          // Using finally()
          this.promise.finally(() => {
                  expect($("#partsList").text()).toContain("No parts list provided");
                  done();
                });
        })
        

    Would you use then(), catch() or finally() in the spec?

    Using then()

    The purpose of the promise is to load a file. The file is not there, so the promise does not settle successfully. As it does not settle successfully, then() will not be called – and the spec would simply time out waiting for done().

    Using catch()

    The promise is promising us that it will load a file and show something on the screen. It fails. It settles, but fails. It would be best to put the spec in catch().

    Using finally()

    The promise fails because the test is set up like this. We've set up the test to have the wrong URL. But this is only in this test. What if other clients are waiting on the same promise in production code? Should they use finally()?

    From the finally documentation – https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/finally

    The finally() method can be useful if you want to do some processing or cleanup once the promise is settled, regardless of its outcome.

    The key here is “regardless of its outcome”. We get a Promise that is promising us to load a file. It fails. We care about the outcome. We care to have a successfully loaded list of parts, and if there is an ‘exceptional case’ we should catch() it and process it. We know exactly why the promise settles. We care.
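
    In production code the same separation holds. A minimal sketch (the helper names are made up for the example):

    // Hypothetical helpers so the sketch is self-contained.
    const loadPartsList = () => fetch("/parts-list.json").then((r) => r.json());
    const renderPartsList = (parts) => console.log("rendering", parts);
    const showMessage = (message) => console.log(message);
    const hideLoadingSpinner = () => console.log("spinner hidden");

    loadPartsList()
      .then(renderPartsList)                              // success – we got the parts
      .catch(() => showMessage("No parts list provided")) // failure – we care why it settled
      .finally(hideLoadingSpinner);                       // cleanup – regardless of outcome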

     
  • kmitov 6:30 pm on March 28, 2020
    Tags: git

    Time to be praised as “The Guru” in your office – or how to move a folder from one Git repo to another Git repo and preserve the history 

    Ok. This is the 5th or 6th time that I am doing this, and one of my colleagues asked me:

    Could you please write down how you do it so that we could do it without you next time.

    So here it is. A 2-minute read and you can move files from one Git repo to another repo and keep the commit history for some of the files. Commands below, but first:

    Word of caution (please):

    Such enormous power under your fingers will make you an object of great attention in your company. Colleagues will sing songs about you. You will be praised as "The Guru". Small talk before meetings will start with "Have you heard of that dude that can move files to a new git repo and keep all the history?". (Believe me, it is like walking on water.) Also, I can not promise, but I am pretty sure you can get laid with such knowledge. I once got laid for knowing JUnit and Eclipse so… who knows.

    The disadvantage is that once people learn about you and your knowledge, you will be their go-to person for questions about Git. Git is quite complex and many people are lazy when it comes to reading documentation, so naturally many people will start asking you questions. Mostly stupid questions, of course. Can you handle the load?

    Task

    Moving the folder integration/processors from the old_repo repo to new_repo.

    Commands

    # Enter new repo
    $ cd new_repo/
    
    # Make sure you are up to date
    $ git pull
    Already up to date.
    
    # Check remotes. Just to see what you've got
    $ git remote -v
    origin  git@host:new_repo (fetch)
    origin  git@host:new_repo (push)
    
    # You are in the new repo. Add the remote to the old_repo
    $ git remote add old_repo git@host:old_repo
    
    # Make sure the remote for old_repo is added
    $ git remote -v
    old_repo  git@host:old_repo (fetch)
    old_repo  git@host:old_repo (push)
    origin  git@host:new_repo (fetch)
    origin  git@host:new_repo (push)
    
    # Fetch from the branch of old_repo as it is now an origin
    $ git fetch old_repo
    ...
    From host:old_repo
     * [new branch]      dev                     -> old_repo/dev
    
    # Checkout the branch from old_repo
    $ git checkout --track old_repo/dev
    
    # Remove all the paths that you don't need. Keep the paths that you do need. Bunch of magic. Better read the documentation about it.  
    $ git filter-branch --force --index-filter   "git rm --cached -r --ignore-unmatch PATH_1 PATH_2 EVERYPATH_THAT_DOES_NOT_INCLUDE_INTEGRATION/PROCESSORS"   --prune-empty --tag-name-filter cat -- --all
    
    # Return to your master branch
    $ git checkout master
    
    # Merge the already filtered branch to your master.
    $ git merge dev  --allow-unrelated-histories
    
    # Think not twice, but three times. After this there is no turning back. It's the Fame or the Shame!!!
    $ git push -f

     
  • kmitov 6:26 am on March 23, 2020
    Tags: prettier

    A simple warning goes a long way 

    TL;DR;

    Just warn people with a simple message when you are deprecating a behavior in your tool and introducing a breaking change. It's not that difficult.

    Story

    Yesterday I kind of woke up to a nasty surprise in our local Continuous Integration.

    (Image: Continuous Integration on Jenkins failing miserably)

    The problem

    Prettier (https://github.com/prettier/prettier/) released a breaking change going from version 1.19.1 to 2.0.1. This broke most of our projects.

    The bigger problem

    Prettier is one of the nicest tools we've used. It allows us to keep the code formatted. It is also integrated into our CI, and if a file is not properly formatted when committed, the build fails.

    Several months ago it took us 17.5 hours to roll out Prettier to all developers and all projects, and since then we have had no problems.

    Then the update happened.

    I have nothing against breaking changes in an API or a project. I welcome them, especially in non-critical tools such as code formatting. People learn. People need to learn, and building and maintaining an API takes practice, consideration and a lot of experience. I have personally broken some API(s) that I've developed in the past. But what I think about breaking changes is that you should properly communicate them to your clients. We are using prettier in a very simple way. Here is the command:

    npx prettier app/**/*.js test/dummy/spec/javascripts/**/*_spec.js vendor/assets/javascripts/gcc/externs/*.externs.js --write --config prettier_conf.json

    That's it. It turns out that as of version 2.0.1 prettier has broken this behavior, and now if the project has no files matching one of the globs, it returns an error.

    For version 2.0.1

    $ mkdir pretti
    $ cd pretti/
    /pretti$ touch some.js
    /pretti$ npx prettier --version
    2.0.1
    /pretti$ ls
    some.js
    /pretti$ npx prettier app/**/*.js *.js
    [error] No files matching the pattern were found: "app/**/*.js".
    /pretti$ echo $?
    127

    For version 1.19.1

    $ mkdir pretti
    $ cd pretti/
    /pretti$ touch some.js
    /pretti$ npx prettier --version
    1.19.1
    /pretti$ ls
    some.js
    /pretti$ npx prettier app/**/*.js *.js
    /pretti$ echo $?
    0

    See what they did there? Previously, if a pattern was not matched, prettier returned 0; now it returns 127, which on Linux just means error.

    Conclusion and solution

    “Professionals have standards”

    When designing tools, have interoperability in mind. Make breaking changes, but release a version that warns people about the deprecation and about the breaking change they are about to experience. Like a simple print to the console in version 1.99 (the one before the breaking change) that says: "hey, this is deprecated and will be removed in 2.0. Please read 'link', so that your clients don't break, so that you don't get issues opened on your GitHub, and so that nobody writes blog posts about it. Stay safe."
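
    Such a warning costs a handful of lines. A hypothetical sketch of what the pre-2.0 release could have done (this is not Prettier's actual code):

    // Keep the old behavior for now, but tell people it is going away.
    // hasMatchingFiles is passed in so the sketch stays self-contained;
    // a real tool would hit the filesystem here.
    function filterMatchedPatterns(patterns, hasMatchingFiles) {
      for (const pattern of patterns) {
        if (!hasMatchingFiles(pattern)) {
          console.warn(
            `[warn] No files match "${pattern}". ` +
            `As of 2.0 this will be an error. See <link> for details.`
          );
        }
      }
      return patterns.filter(hasMatchingFiles); // old behavior preserved
    }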


     
  • kmitov 10:02 am on December 31, 2018
    Tags: Software Planning, Trello

    How to plan with Trello? Part 1 – backlog and sprint board 

    I recently shared this with a friend who is constantly getting lost with Trello and how exactly to structure his software project plan. I shared my experience with him and he kind of liked it, so here is my story and the few rules that have been keeping me sane for the past 2 years of following them.

    The main issues with planning a software project with Trello are to decide:

    • are different features on different boards?
    • why do you need labels – are different features marked with labels?
    • are different features in lists?
    • how do you set the priority for a task – do you have a list for priority, or a label for priority?

    Because of these questions, over the last 4-5-6 years I've started and stopped using Trello many times.

    These are all difficult questions. Here are my simple solutions.

    TL;DR;

    Create two boards: Backlog and SprintXX, where XX is the number of the sprint. In SprintXX you have three lists: "SPXX Planned", "Ongoing" and "Done SPXX December 01 – December 15". When the sprint, which is two to three weeks long, finishes, you archive "Done SPXX December 01 – December 15" and create a new "Done SPXX+1 December 16 – December 31" list. Then you rename the list "SPXX Planned" to "SPXX+1 Planned".

    This keeps the Trello boards clear.

    Create two boards

    Board one is the Sprint board
    Board two is the Backlog board

    If you are currently not working on a task and there is little to no chance you will work on it in the next 3-4 weeks, then it is in the Backlog. This means it will be handled later.

    Sprint Board

    The Sprint board has the name of the current sprint. I like sprints that are 2-3 weeks long. It has three lists:

    SPXX Planned

    The list has all the tasks that are planned for the current sprint or probably the next one. These are tasks that you are genuinely planning to do something about.

    Ongoing

    These are all the tasks that we are currently working on. If we have even a single line of code for a task, then we are working on it.

    Done SPXX December 01 – December 15

    These are all the tasks completed in sprint XX. Note that the list has the name "Done SPXX December 01 – December 15". This is the full name of the sprint.

    At the end of the sprint

    When the sprint ends you archive "Done SPXX December 01 – December 15". You do not archive the tasks. You archive the whole list. This gives you a chance to get back to the list at the regular reviews that you are having with the team and actually review what happened in this sprint.



     