Better a day to lose

In investing there is a concept: "worse now — better later." An investor regularly sets aside 10, 20, or even 30% of earnings for the future and puts this money into bonds, stocks, OFZ, ETFs — whatever suits them. Right now, at this moment, the investor takes away part of his own income and denies himself some comfort so that in the future, on a horizon of 10–20 years, the investments pay off. The future profit will cover today's hardships. Alexey Okhrimenko ( obenjiro ) practices roughly the same strategy, but applied to development: it is better to lose a day now, and then fly there in five minutes.


At Frontend Conf 2018, Alexey explained how losing a lot of time now ultimately saves it later. This talk is not about boredom and not about coping with monotonous, routine tasks, but about spending time to the maximum: however much there is, spend it all and see what happens. The transcript covers his experience writing tools for debugging, testing, optimization, scaling, and validation on different projects. As a bonus, Alexey discusses a number of existing tools and the benefits they bring. Let's find out whether you should waste time on all this at all.

About the speaker: Alexey Okhrimenko is a developer in the Frontend Architecture team at Avito, where he slightly improves the lives of millions of people. He runs the podcast "5 min Angular" , organizes Angular Meetup together with folks from Tinkoff, and gives a huge number of varied and controversial talks .

Where can I lose time?

Step zero is to buy a Mac/iMac and immediately start losing time, or to put Linux on a laptop and lose all your working hours in it tweaking configs. I also highly recommend starting with Gentoo.

There are 8 points we can spend time on.

  • Terminal.
  • Design.
  • Create a project.
  • Code Generation.
  • Writing code.
  • Refactoring.
  • Testing.
  • Debugging.

Let's begin our profound loss of time, taking the list in order.


Terminal

Where in the terminal can we spend our time so as to lose it all? Organize your workspace: create folders like My Work and My Hobby Projects and put everything into them. Install Homebrew so you can install the additional software that will be mentioned below.

Install iTerm2 , and ditch the default terminal on the Mac.

Install add-ons such as oh-my-zsh , which comes with a set of very cool plugins.

Install tmux , a terminal multiplexer. It is a terminal program that lets you open several windows inside a single window and additionally keeps sessions alive. Normally, if you close the terminal, everything breaks down and ends; tmux keeps working even after you have shut everything down. If you have never worked with tmux, I recommend the review from DBMS Studio .
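If you do end up with tmux, a few lines in `~/.tmux.conf` go a long way. The settings below are my own illustrative sketch, not a recommendation from the talk:

```
# ~/.tmux.conf — a few illustrative settings
set -g prefix C-a          # rebind the prefix from Ctrl-b to Ctrl-a
set -g mouse on            # mouse support for panes and scrolling
set -g history-limit 10000 # keep more scrollback
```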

Write aliases . Every time you type something in the terminal more than once, make yourself an alias — it will come in handy. Twice is already a lot; there will surely be a third, a sixth, and a tenth time.
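A few hypothetical examples of what such aliases (and a small helper function) might look like — the names and commands here are my own illustration, not from the talk:

```shell
# Illustrative aliases: pick names that match what you actually retype.
alias gs='git status'
alias gl='git log --oneline -n 20'
alias nr='npm run'

# Anything that takes arguments is better as a function than an alias:
mkcd() { mkdir -p "$1" && cd "$1"; }
```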

Install additional tools, for example jmespath and its CLI, jp. You can install it via brew and run expressive queries against JSON files.

  brew tap jmespath/jmespath
  brew install jp

For example, you have a pile of package.json files: you can walk through all of them and find out which versions of React your applications and projects use.

Automate your work - do not open the same files many times!

Now let's talk about where to spend all this time. Everything above is small change — you can lose far more in shell scripts.

Shell Script

Shell script is a programming language in its own right, most commonly bash, with its own syntax.

  for dir in `ls $YOUR_TOP_LEVEL_FOLDER`;
    for subdir in `ls $YOUR_TOP_LEVEL_FOLDER/$dir`
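A complete, runnable version of such a nested loop might look like this — the function name and folder layout are illustrative:

```shell
# Walk two levels of project folders and print each subdirectory.
list_subprojects() {
  local top="$1"
  for dir in "$top"/*/; do
    for subdir in "$dir"*/; do
      if [ -d "$subdir" ]; then
        echo "$subdir"
      fi
    done
  done
}
```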

It is a full language — some people even write games and web servers in it, which I do not advise. What I do recommend: take all the setup work you have already spent time on, and write it all down in files. What for? Developers who have been in the industry for a long time simply create a GitHub repository for their configurations and keep there the config for their tmux terminal multiplexer and the shell scripts for initialization.

Why spend a lot of time on what has already been done once? Because you will move to another job, your work computer will be replaced, or the motherboard will burn out — and you will again spend a day, two, or three setting up your environment . When you have such a repository, setup and installation take 10 minutes.


Design

Usually everyone gets excited right away: "Yes, design! UML diagrams!" But when I say the word UML out loud, many programmer friends remark:

- In 2018?! What is going on? UML is a terrible relic of the past. Why are you digging up the corpse? Drop the shovel!

Yet UML is very useful. For example, at a Scrum meeting a Java developer listens to Python programmers discuss the architecture of a backend feature. He sadly rubs his head and realizes that he understands nothing — he is simply losing an hour of his time. A Java developer cannot join in with the Python programmers: he cannot suggest how to write the code, how to use classes, mixins, or anything else. He simply does not take part. Our company has JavaScript, Python, and Lua. At any given moment two-thirds of the people are bored: first one two-thirds, then the other. UML solves this problem.

UML is a versatile abstract visual language for system design that allows you to ignore language features.

I will give two of my favorite examples.

Sequence Diagrams

These diagrams help to show the history of interaction in time.

On the vertical Y axis, time flows downward: first we receive an authentication request, then we return a response, and then we write something to the logs. Along the horizontal X axis, the interaction happens between the actors — the participants in some event.

Personally, I occasionally use sequence diagrams to describe the authentication process in applications. As a JS developer, I find a common language with the Python, Lua, and Java backend developers. We all understand each other and know how the code will eventually work, and we do not get bogged down in the concrete implementation in this or that language.
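A minimal PlantUML sketch of such an authentication sequence — the participant names and messages are illustrative, not from the talk:

```
@startuml
participant Client
participant API
participant Logger

Client -> API: POST /auth (credentials)
API --> Client: 200 OK + token
API -> Logger: write auth event to the log
@enduml
```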

Class Diagram

These diagrams are also very useful. JavaScript has classes — so what is the point of diagrams? But there is TypeScript, and with it you get interfaces and abstract classes: a complete picture of the final architecture.

A minute of design saves a week of coding.


I use the PlantUML Java library. It gives you a fairly rich DSL in which you specify, for example, that AbstractList derives from List and AbstractCollection from Collection, as well as interactions, aggregation, properties, interfaces, and everything else.

  @startuml
  abstract class AbstractList
  abstract AbstractCollection
  interface List
  interface Collection

  List <|-- AbstractList
  Collection <|-- AbstractCollection
  Collection <|-- List
  AbstractCollection <|-- AbstractList
  AbstractList <|-- ArrayList

  class ArrayList {
    Object[] elementData
    size()
  }

  enum TimeUnit {
    DAYS
    HOURS
    MINUTES
  }
  @enduml

As a result, I will get the final diagram.

It all works well, there are plugins for Visual Studio Code.
There is another interesting application.


Let's draw a simple diagram: there is a base class, and a test class inherits from it.

Next we use StarUML . It is not too expensive and can export to Java. There is no tool that exports UML diagrams to TypeScript code, but with StarUML we can export to Java code.


Then we use the JSweet library, which converts Java code to TypeScript or JavaScript code.

Java code ...

  import java.util.*;

  public class BaseClass {
      /** Default constructor */
      public BaseClass() {
      }

      /** some attribute */
      protected String baseAttribute;
  }

... with JSweet we convert to TypeScript code:

  /* Generated from Java with JSweet 2.0.0 */
  /**
   * Default constructor
   * @class
   */
  class BaseClass {
      public constructor() {
          this.baseAttribute = null;
      }

      /** some attribute */
      baseAttribute: string;
  }
  BaseClass["__class"] = "BaseClass";

There is an extra __class property — a Java artifact that can be removed. As a result, we got ready boilerplate code from the diagrams — a base to build on. And this base is designed up front and clear to everyone.

It’s definitely worth spending time designing UML.

Create a project

Those of you who configure webpack every time and create a webpack config in each new project — guys, what is going on?! Is everything all right? Do you need help? If you are being held hostage, write the coordinates in the comments and we will send a rescue helicopter.

The easiest way to avoid this and stop configuring the same thing every time is to create a shared repository on GitHub or on a self-hosted GitLab, clone that repository, go into it, and delete the .git folder.

  git clone something
 cd something
 rm -rf .git  

Now we have a reference project to clone from. With this approach you get very cheap bootstrapping .

Yeoman - deprecated. Slush - deprecated

Calling Yeoman deprecated is too bold. It is not deprecated; it is simply used less and less, like Slush . These are two similar tools with different foundations: Yeoman is Grunt plus code generation; Slush is Gulp plus code generation.

Despite the fact that the tools are interesting, now others are more often used.

Angular CLI, Create React App, Vue CLI

If you work with Angular, use Angular CLI; with React — Create React App; Vue.js fans get Vue CLI.

Most people have already moved to these tools. One of the main arguments for working with a CLI is uniformity . If you scaffold your project with a CLI, you can be sure that the person who comes after you will know the project structure: the commands, the features, the fact that end-to-end and unit tests can be run. These tools are very good.

Is it worth spending time bootstrapping projects with a CLI instead of Yeoman? Yes, without a doubt.

Code Generation

We have a certain code base. Usually, when we start a project, we create the routing first, and then Redux — how could we do without it? Each framework has a specialized code-generation tool. Angular has CLI Schematics . Vue CLI has a separate plugin mechanism, Vue CLI plugins : in the plugins section you can generate code for your projects.

Redux CLI

I want to dwell on React and the Redux CLI, because in my experience React programmers use code generation the least, and it is painful to watch. Every time, people create the same files by hand and complain that working with Redux is hard — you have to create so much. But the tools already exist!

This is the Redux CLI , which will create a whole set of files for you: effects, reducers, the corresponding actions, "dumb" components, and "smart" components. You can also generate your own components or code base with the Redux CLI. Installing it is simple; you can either create a project with it or initialize it in an existing one, for example one created with Create React App.

  npm i redux-cli -g
  blueprint new <project name>
  blueprint init

  blueprint g dumb SimpleButton

There is another universal tool that does not depend on the framework - Plop .


I learned about it recently. Plop does the same thing as the previous tool: after initializing it, you can create all the necessary basic components. You specify what components your application consists of and simply generate them, so you do not spend time creating the core code base. With a user story and a specification in hand, you can generate the basic functionality, tests, and base styles — you will save yourself a lot of work .
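To give a flavor of the tool, here is a minimal plopfile sketch. The generator name, prompt, and template paths are my own illustration; `plop.setGenerator` is the API Plop actually exposes:

```javascript
// plopfile.js — one generator that stamps out a "dumb" component
// plus a test file from Handlebars templates (paths are illustrative).
function configurePlop(plop) {
  plop.setGenerator('component', {
    description: 'dumb component with a test',
    prompts: [
      { type: 'input', name: 'name', message: 'Component name?' }
    ],
    actions: [
      {
        type: 'add',
        path: 'src/components/{{pascalCase name}}/{{pascalCase name}}.js',
        templateFile: 'plop-templates/component.hbs'
      },
      {
        type: 'add',
        path: 'src/components/{{pascalCase name}}/{{pascalCase name}}.test.js',
        templateFile: 'plop-templates/component.test.hbs'
      }
    ]
  });
}

module.exports = configurePlop;
```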

All these tools have to be customized — I periodically tweak my React blueprints and maintain my own component library — but this time pays off .

Writing Code

What follows is going to sound banal.

Code snippets

Code snippets let you type a small fragment — a keyword — and get a ready-made piece of functionality. For example, you can create an Angular component by typing @Component .

For React and Vue there are the same code snippets.

There is a problem with run-of-the-mill code snippets: the more professional the developer, the less he uses them — simply because he already knows how everything is written and is too lazy to create them. He has already memorized how to spell out that component.

Let me remind you: our goal is to spend time without doing anything useful. So we sit down and write code snippets. Here you can spend an infinitely large amount of time, and the goal will be achieved.
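For instance, a user snippet in Visual Studio Code is just a JSON file. A hypothetical Angular-component snippet might look like this — the prefix and body are my own illustration:

```json
{
  "Angular component": {
    "prefix": "a-component",
    "description": "Scaffold an Angular component",
    "body": [
      "@Component({",
      "  selector: '${1:app-root}',",
      "  template: `${2}`",
      "})",
      "export class ${3:My}Component {}"
    ]
  }
}
```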

I personally found snippets useful when I worked with i-bem.js :

  modules.define("button", ["i-bem-dom"], function (provide, bemDom) {
    provide(
      bemDom.declBlock(this.name, {
        /* instance methods */
      }, {
        /* static methods */
      })
    );
  });

There is nothing complicated in this declaration, but the syntax resembles neither Angular, nor React, nor Vue, and for the first hundred times it is very hard to remember. On the hundred and first, I remembered. I suffered, spent a lot of time, and then started churning out these components en masse purely thanks to code snippets.

For those who work with WebStorm, this is not very relevant, simply because it does not have such a large plugin ecosystem and, basically, everything is included out of the box — it is a full IDE .

VScode extensions/VIM extensions

Things are different with the Visual Studio Code and VIM editors. To get real benefit from them, you need to install plugins. Finding all the good plugins and installing them can take several days — there are insanely many of them!

I killed an insane amount of time searching for them, and I recommend you do the same. You can sit for hours, searching, looking at them and their beautiful animated GIFs — a delight! Write in the comments if you want me to share everything I have.

There are tools that automatically highlight code complexity; tools that show which tests pass and which do not, with the reason for a failure and the covered code visible right in the editor; autocompleters, autoprefixers — all of this comes as plugins.

Here you can spend a lot of time, and we will reach our goal. Of course, plugins are not strictly about writing code, but let's imagine that they help us write it.


Refactoring

This is my favorite topic! So much so that I have a separate talk on refactoring: "Refactoring: Where? From where? When? Why? and How?" In it I explain in detail what refactoring is and how to work with it.

I warn you right away: refactoring is not what you usually imagine . Usually people mean: "I improved the code base and added a new feature." That is not refactoring. If you are now experiencing cognitive dissonance, watch the talk and it will pass.

AngularJS: Grunt → webpack

About refactoring I want to tell one instructive story. We had a very old AngularJS project that was built with Grunt by banal concatenation. The project was written back in the first versions of AngularJS. Everything was very simple there: the files were concatenated, then uglified, and that was it. At some point we realized we had to move to webpack. We had a huge legacy code base — how do you move it to webpack?

We took a few interesting approaches. First, we turned to a conversion library.

This library converts code from ES5 to ES6, and does it very well. It takes the old code base and turns it into a new one: it inserts imports, uses modern string syntax and classes, places let and const correctly — it does everything for you. In that respect it is a very good library.

We installed this tool and ran the code of our files through it . After that we simply took Mustache templates and the code that looks different in the new component-style Angular 1.5 and 1.6. With regular expressions we pulled out the necessary pieces, rendered our templates with Mustache in the new shape, and looped through all our files.

  var object_to_render = { key: "value", ... };

  fs.readFile(path_to_mustache_template, function (err, data) {
    if (err) throw err;
    var output = Mustache.render(data.toString(), object_to_render);
    fs.writeFileSync(path_to_result, output); // paths are placeholders from the slide
  });

As a result, we converted a huge amount of legacy code into a modern format and quickly wired up webpack. For me personally, this story is very instructive.


Jsfmt

This is a tool that lets you format the code base and search it — not with ordinary text search but semantically . We require the library and the file system module, read a file, and look for what we need. Below is an abstract example; suppose we are working with Angular.

  var jsfmt = require('jsfmt');
  var fs = require('fs');

  var js = fs.readFileSync('component.js');

  jsfmt.search(js, "R.Component(a, { dependencies: z })").map(function (matches, wildcards) {
    console.log(wildcards.z);
  });

This is what our query looks like:

  R.Component(a, { dependencies: z })

R.Component here is our own library R and some Component .

The a and z in it look very strange:

  R.Component(a, { dependencies: z })

This does not look like valid JavaScript — and yet it is. We insert single letters as placeholders and tell jsfmt that we do not care what stands there: an object or an array, a string or a boolean, null or undefined — it does not matter. What matters is getting references to a and z ; then, walking the entire code base, we find all the variants of z . For example, we can find all the dependencies of this component. This makes complex refactorings possible.

Using this tool, I managed to refactor a huge code base semantically, based on syntax trees and analysis.

I did not have to write complex queries or regular expressions, or parse the syntax tree myself — I just formed a query and indicated what to change.

Two additional tools

There is one simple thing about refactoring that I have to mention. If you want to refactor something in Visual Studio Code, select the code, and hints and refactoring options will appear — for example, extract method or inline method.

WebStorm has a context menu, reachable via a key combination depending on your configuration, from which you can refactor the code base.

In general, WebStorm has more refactoring commands; at the moment it is more advanced than Visual Studio Code.


Testing

Now for the most interesting and inspiring part :)

Selenium IDE

First, a little story. One day the testers came to me and said:

- We write end-to-end tests, we want to automate them, and we have Selenium IDE.

Selenium IDE is just a Firefox plugin that records your actions in the browser. It remembers all your steps — clicks, scrolls, text input, navigation — and you can replay those steps. But that is not all: you can export what you recorded, for example to Java or Python, and run automated end-to-end tests based on the Selenium IDE recording.

It sounds great, but in reality the Selenium IDE does not work perfectly on its own — and on top of that, we had ExtJS at the time.


If you have ever had ExtJS — my sympathies and a hug. Selenium IDE always records the most unique selector, and on our elements that is the id. But ExtJS generates a random id for every element, I do not know why. This problem has existed in ExtJS since version zero.

  ExtJS => <div id="random_6452"/>

As a result, our testers opened the application in the morning, recorded everything, and then, without reloading the page , periodically replayed the recording — for example, to figure out whether the backend had broken. The backend would be updated while the frontend stayed untouched. The main thing was not to hit refresh, because after that new ids were generated.

Then one brilliant idea occurred to the testers. Selenium IDE can export its recordings to HTML format — and we know how to work with HTML, we have template engines — let's try it!

Google Chrome Extension

We quickly created a Google Chrome extension and immediately discovered the handy elementFromPoint method.

  document.elementFromPoint(x, y);

By simply recording mouse movement over the window and then calling elementFromPoint when a click fired, I found the element that was clicked. Next I had to build a selector that would somehow pick out exactly this element. The id cannot be used — what to do?
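The recording idea can be sketched like this. The `doc` parameter exists only to make the sketch testable — in a real extension it would simply be `document`:

```javascript
// Track the last mouse position; on every click, resolve the element
// under the cursor and append it to the log.
function createClickRecorder(doc) {
  let x = 0;
  let y = 0;
  const log = [];
  doc.addEventListener('mousemove', (e) => {
    x = e.clientX;
    y = e.clientY;
  });
  doc.addEventListener('click', () => {
    log.push(doc.elementFromPoint(x, y));
  });
  return log;
}
```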

The idea came: additionally hang a special test-id on the components — an abstract id created for each component and needed only by the tests.

  data-test-id="ComponentTestId"

It was generated only in the test environment, and we used it to select by the data attribute. But that is not always enough. For example, we have a component, but inside it there is also a div , a span , an icon in an i tag. What do we do with those?

For this "tail" we additionally generated an XPath :

  function createXPathFromElement(elm) {
    var segs = [];
    for (; elm && elm.nodeType === 1; elm = elm.parentNode) {
      if (elm.hasAttribute('class')) {
        segs.unshift(elm.localName.toLowerCase() +
          '[@class="' + elm.getAttribute('class') + '"]');
      } else {
        for (var i = 1, sib = elm.previousSibling; sib; sib = sib.previousSibling) {
          if (sib.localName === elm.localName) i++;
        }
        segs.unshift(elm.localName.toLowerCase() + '[' + i + ']');
      }
    }
    return segs.length ? '/' + segs.join('/') : null;
  }

As a result, a unique XPath selector is formed which, in the good case, consists of a data-attribute selector with the component name:

  .//*[@data-test-id='ComponentName']/ul/li/div/p[2]

If the component contained some further complex structure, everything else was pinned down with an additional strict XPath — without ids. We avoided ids because we were working with ExtJS.

This XPath was easy to test. We recorded everything, exported it back to an HTML document, loaded it back into the Selenium IDE, and replayed it.

We created a Chrome extension that simply generated the Selenium IDE recording format, but in its own way, not the way Selenium IDE does. There we also added many clever checks — waiting for spinners, successful application loading — nuances that Selenium IDE does not take into account. Thanks to this, we fully automated our end-to-end tests.

The only thing left for the testers was to open any version of the application, click through it, load the recording into Selenium IDE, check it, save it as Python code, enjoy the raise and the bonus, and say "thank you" to me.

As for unit tests, I cannot please people from the React and VueJS communities — sorry! I do not know of similar tools for React and VueJS, though perhaps they exist. I can only please those on Angular.


SimonTest

There is a SimonTest plugin for Angular in Visual Studio Code.

If you need a unit test for your component, point at the component and tell the plugin to generate the skeleton of the test. All the necessary scaffolding then appears:

  • all the necessary dependencies will be created and mocked, with the properties required for correct testing specified;
  • tests with basic validation will be created for the methods and the basic functionality.

It remains only to add checks of the business logic — the logic of your component, or the application logic inside it. With one generation command we get a complete skeleton for unit tests.

Spend time on testing — it will not be in vain.


Debugging

The first 80% of development time is not as bad as the last 80% spent on debugging.

Where can we spend time on debugging while we do not yet have any exceptions or problems? What can be done, and how can a lot of time be spent, at this stage?

Chrome DevTools

Here we can check performance, collect data, and debug our code to understand how it really works — especially if the code base is old.

What alternatives to the debugger do you know? Full-stack and backend programmers are simply obliged to know the alternatives used for debugging. There is the profiler, but it is mostly for measuring performance; there are dumps, but we are interested in what happens at runtime. And there is a monitoring technique that lets us understand what is happening and what is going wrong.


First, the concept of a tracer: all events are recorded in real time. At runtime a complete log of everything is written: a click happened and triggered an event; after the event a promise was called, then a setTimeout, and after the setTimeout another promise. The tracer catches all these events.

Spy-js vs TraceGL

Initially there were two main competitors: Spy-js and TraceGL . These competing tracers could show in real time what was going on in the program. The difference from a debugger is this: suppose your code base is a thousand lines — how many times do you have to step through it? It is long, hard, and dreary, and you cannot always catch the bug.

A debugger has a problem, familiar from multithreaded backends: once you start setting breakpoints, some things simply cannot be reproduced. If you have a multithreaded program with a deadlock, you will not catch the deadlock with a debugger, because the events will arrive in a different order.

In JS the same thing sometimes happens, and tracers help: they let you see the real picture in real time. You just analyze the deadlocks and that is it.

Spy-js was bought by WebStorm: the repository was emptied and new versions are no longer published; changes now appear only in the spy-js bundled with WebStorm. TraceGL was bought by Mozilla. Its developer had huge plans and promised that a super-tracer would appear in Firefox. TraceGL was cool, but then management apparently decided to roll the features out incrementally, and they are still being rolled out. TraceGL-style traces in Chrome are not visible and most likely never will be.

Rejoice, owners of WebStorm, for Spy-js lives there. It is very easy to configure: you create a Spy-js run configuration, start your project, and it begins catching all the relevant events in real time so you can analyze them. WebStorm adds a few more nice features: compatibility with TypeScript and CoffeeScript, and display of the latest execution data. If you run code under Spy-js, after the program finishes you can see which variables and values your arguments held. In this respect the tool is superb.

In every new project I joined, I turned on the tracer, and after 5 minutes I knew how the project works: the architecture, the structure, how the parts interact, what events occur. A few minutes — and I am an expert in any code base, simply because I saw and understood what happens in real time.

What do we have now in our arsenal?

  • We spent a lot of time configuring the terminal and writing scripts for it.
  • On design we spent time on diagrams, which we ultimately turned into code anyway — not complete code, but a skeleton for the business logic.
  • Creating a project : we scaffolded the project with tools.
  • Code generation : we generated a fair amount of code — the basic components — that is, did the rough work.
  • We did not write the code itself, but we prepared everything: installed all the plugins and snippets.
  • On refactoring you can kill an infinite amount of time, and we did, though with clever tools it came out a bit different from what I wanted.
  • On testing we spent a lot of time: we built our own test recorder.
  • On debugging you can spend as much time as you want! You can debug endlessly.

Why did we do all this? I will quote a professional from the cartoon "Wings, Legs, and Tails":

— You will never take off like that! Remember: it is better to lose a day, and then fly there in five minutes! Forward!

Alexey's talk is one of the best of the 2018 conference. In a couple of weeks, Frontend Conf will be held as part of RIT++. Liked it? Come to Frontend Conf RIT++ in May and subscribe to the newsletter : new materials, announcements, video access, and more cool articles.

