Recap.Dev Now Traces Sails.js Applications

Arseny Yankovski

Lead Architect @ eMarketeer

We're happy to announce that Sails.js, one of the popular Node.js frameworks, is getting the love it deserves from us. It comes in the form of a hook that will automatically trace your application.

That means tracing your application with Recap.Dev requires no code changes at all: just install the hook and point it at your Recap.Dev server. Why would you do that? Here's an article giving you five good reasons. In short, it helps you understand your application and how much time various operations take while serving a particular request.

Click here to learn how to get started with Recap.Dev

Recap.Dev Server 0.6.2 Available Now

Arseny Yankovski

Lead Architect @ eMarketeer

We're happy to announce that Recap.Dev server 0.6.2 was released today.

Notable changes include:

  • Performance and stability improvements. We fixed a couple of bugs, making your Recap.Dev server faster and more reliable.

  • Bugfixes. Fixed a notable bug where AWS Lambda timeout errors were not reported to your Slack, as well as a couple of smaller ones.

  • Fully anonymized usage statistics tracking. To understand our users better, we added fully anonymized and GDPR-compliant usage tracking. It doesn't collect any personal, private, or sensitive information. You can still opt out and completely disable it by setting the DISABLE_USAGE_ANALYTICS environment variable to 'true' on your Recap.Dev server installation.
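
If you run the server from the docker-compose template, the opt-out is one line in the environment section. A minimal sketch (the service name "recap-dev" here is an assumption, not the actual template):

```yaml
# docker-compose.override.yml — sketch; the service name is an assumption
services:
  recap-dev:
    environment:
      # Opts this installation out of anonymized usage tracking
      - DISABLE_USAGE_ANALYTICS=true
```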

Click here to learn how to upgrade your Recap.Dev server

5 Reasons to Use Recap.Dev

Arseny Yankovski

Lead Architect @ eMarketeer

Whatever your application stack is, you will benefit from better observability of your system, and setting up tracing is probably one of the easiest ways to get it. At Recap.Dev, we strive to make improving application stability easy while providing a nice experience to our actual users: engineers of all sorts.

Let me walk you through some of the main benefits I've found while using Recap.Dev. Yes, we use it ourselves.

Tracing System as an Application Operations Log

Arseny Yankovski

Lead Architect @ eMarketeer

Imagine you have a big backend system. It performs a lot of operations each day. Let's face the inevitable — some of them will result in an error. Some of them you can ignore, some you'll learn about too late, and some might result in a loss of important data or system downtime.

Let's talk about different kinds of operations and errors and how a tracing system helps developers with all of them.

Recap.Dev Now Supports Tracing Vercel Functions and Netlify Functions

Arseny Yankovski

Lead Architect @ eMarketeer

Vercel and Netlify functions are among the most popular solutions for Jamstack backends.

In a survey conducted by O'Reilly in June 2019, 30% of respondents named harder debugging, and 25% named observability, as the biggest challenges in adopting serverless technologies. In the same report, about 17% mentioned a lack of tools. Debugging serverless applications is not always easy.

If you're using Vercel or Netlify functions, it's even more challenging because there are even fewer tools available. No tracing tool supports these serverless platforms out of the box.

That's why we just released a new 1.12.0 version of the Recap.Dev JavaScript client, which exports two new wrapper functions: wrapNetlifyHandler and wrapVercelHandler. These allow tracing functions deployed to Netlify and Vercel, respectively.
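
Conceptually, both wrappers do the same thing: they take your handler and return a new function that records how long the call took and whether it failed, then pass the result through. A simplified, self-contained sketch of that pattern (not the actual client code, which reports spans to the Recap.Dev server rather than an in-memory list):

```javascript
// Simplified sketch of a tracing wrapper; illustrative only.
const spans = [];

function wrapHandler(name, handler) {
  return async (...args) => {
    const start = Date.now();
    try {
      const result = await handler(...args);
      spans.push({ name, durationMs: Date.now() - start, error: null });
      return result;
    } catch (err) {
      spans.push({ name, durationMs: Date.now() - start, error: err.message });
      throw err; // re-throw so the platform still sees the failure
    }
  };
}

// Usage with a Netlify-style handler signature
const hello = wrapHandler('hello', async (event) => ({
  statusCode: 200,
  body: JSON.stringify({ message: 'Hello' }),
}));
```

The real wrappers work the same way from the caller's perspective: the wrapped function keeps the platform's handler signature, so no other code has to change.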

Function Level Tracing with TypeScript Transformer

Arseny Yankovski

Lead Architect @ eMarketeer

We just released a new TypeScript transformer that automatically wraps your functions and classes with function-level tracing.

This provides a better alternative for projects compiled with the TypeScript compiler or with ts-loader for Webpack, eliminating the need for Babel.
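
With ts-loader, custom transformers are plugged in through its getCustomTransformers option. A sketch of what the wiring looks like (the transformer's package name and export here are assumptions; check the usage instructions for the real ones):

```javascript
// webpack.config.js — sketch; '@recap.dev/ts-transformer' is an assumed name
const recapDevTransformer = require('@recap.dev/ts-transformer');

module.exports = {
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        options: {
          // ts-loader's hook for plugging in custom TypeScript transformers
          getCustomTransformers: (program) => ({
            before: [recapDevTransformer(program)],
          }),
        },
      },
    ],
  },
};
```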

Check usage instructions here

Adding it to your application won't take more than 5 minutes.

Improved NestJS Tracing with Recap.Dev

Arseny Yankovski

Lead Architect @ eMarketeer

We just released a new 1.9.0 version of the JavaScript client, which exports a new wrapNestJsModule function that, when used like this:

import { NestFactory } from '@nestjs/core';
import { wrapNestJsModule } from '';
const app = await NestFactory.create(wrapNestJsModule(AppModule));

records calls to the controllers and injectables in the NestJS module, along with their timings.

A timeline with NestJS module wrapped

ARM-backed Servers - Better Performance for Less Money

Arseny Yankovski

Lead Architect @ eMarketeer

This year, Apple changed the desktop CPU game with the announcement of Apple Silicon. A similar thing happened a year earlier in the world of cloud computing: AWS released a new type of instance backed by their custom-built ARM processors, called AWS Graviton2. They're supposed to offer up to 40% better price-performance than their x86 counterparts. Another huge recent update is the introduction of Graviton2-based Amazon RDS instances. Let's run a couple of benchmarks and load-test a real-world backend application to see how good ARM servers are and how easy they are to use.


I compared a t4g.small (ARM) instance to a t3.small (x86) EC2 instance. Currently, the on-demand hourly cost in the us-east-1 region for t3.small (x86) is $0.0208 and t4g.small (ARM) is $0.0168. The ARM-backed instance is already around 20% cheaper.
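
A quick check of the price difference:

```javascript
// On-demand hourly prices in us-east-1 quoted above
const x86Price = 0.0208; // t3.small
const armPrice = 0.0168; // t4g.small

// ARM discount as a percentage of the x86 price
const savingPct = ((x86Price - armPrice) / x86Price) * 100;
console.log(savingPct.toFixed(1)); // ≈ 19.2
```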

First, I ran a load-test on a fresh setup with wrk.

It's a docker-compose template running 4 processes. A handler process puts every request into a RabbitMQ queue, and a separate background process inserts traces into a PostgreSQL database in batches of 1000.

A typical setup consists of 4 processes
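
A minimal docker-compose sketch of that topology (service names and images here are illustrative, not the actual Recap.Dev template):

```yaml
# Illustrative sketch of the four-process setup
services:
  handler:              # accepts incoming trace requests over HTTP
    image: example/handler
    ports:
      - "8080:8080"
    depends_on: [rabbitmq]
  inserter:             # background process; drains the queue and inserts
    image: example/inserter   # traces into PostgreSQL in batches of 1000
    depends_on: [rabbitmq, postgres]
  rabbitmq:
    image: rabbitmq:3
  postgres:
    image: postgres:13
```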

I ran wrk on a t3.2xlarge instance in the same region using the following command:

wrk -t24 -c1000 -d300s -s ./post.lua <hostname>

It bombarded the target instance with trace requests for 5 minutes using 24 threads and 1000 HTTP connections.

This is the result I got for t4g.small (ARM) instance:

24 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 473.53ms 53.06ms 1.96s 81.33%
Req/Sec 115.83 96.65 494.00 71.32%
620751 requests in 5.00m, 85.84MB read
Socket errors: connect 0, read 0, write 0, timeout 225
Requests/sec: 2068.48
Transfer/sec: 292.90KB

For the t3.small (x86) instance:

24 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 600.28ms 70.23ms 2.00s 72.53%
Req/Sec 92.77 82.25 404.00 70.26%
488218 requests in 5.00m, 67.51MB read
Socket errors: connect 0, read 0, write 0, timeout 348
Requests/sec: 1626.87
Transfer/sec: 230.37KB

The ARM-backed instance served 27% more requests per second, with responses on average 26% faster.

ARM-backed instance served 27% more requests per second
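
As a sanity check on those two runs:

```javascript
// Numbers taken from the two wrk runs above
const armReqPerSec = 2068.48, x86ReqPerSec = 1626.87;
const armLatencyMs = 473.53, x86LatencyMs = 600.28;

// Extra throughput on the ARM instance
const moreThroughputPct = (armReqPerSec / x86ReqPerSec - 1) * 100;
// How much higher the x86 instance's average latency was
const latencyGapPct = ((x86LatencyMs - armLatencyMs) / armLatencyMs) * 100;

console.log(moreThroughputPct.toFixed(1)); // ≈ 27.1
console.log(latencyGapPct.toFixed(1));     // ≈ 26.8
```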

Then I ran a couple of benchmarks from the Phoronix Test Suite.

pts/compress-7zip-1.7.1 gave 6833 MIPS on t4g.small (ARM) versus 5029 MIPS on t3.small (x86). A 35% higher result on an ARM processor.

ARM-backed instance got a 35% better result in pts/compress-7zip benchmark

ARM-backed server finished the pts/c-ray benchmark more than 2 times faster on average. 958 seconds for x86 versus just 458 for ARM.

The ARM-backed instance was more than 2 times faster in pts/c-ray benchmark

I also ran a bunch of RAM speed tests from pts/ramspeed that measure memory throughput on different operations.

Benchmark Type             t4g.small (ARM)   t3.small (x86)
Add/Integer                50000 MB/s        13008 MB/s
Copy/Integer               58650 MB/s        11772 MB/s
Scale/Integer              31753 MB/s        11989 MB/s
Triad/Integer              36869 MB/s        12818 MB/s
Average/Integer            44280 MB/s        12314 MB/s
Add/Floating Point         49775 MB/s        12750 MB/s
Copy/Floating Point        58749 MB/s        11694 MB/s
Scale/Floating Point       58721 MB/s        11765 MB/s
Triad/Floating Point       49667 MB/s        12809 MB/s
Average/Floating Point     54716 MB/s        12260 MB/s

RAM on the Graviton2 instance was 3 to 5 times faster than on its x86 counterpart

In short, the memory on the t4g.small equipped with a Graviton2 processor was 3 to 5 times faster.

Just looking at the performance and the instance price, the conclusion is that switching to ARM-based instances is a no-brainer: you get more power for less money.


The big question when switching processor architectures is compatibility.

I found that a lot of software had already been recompiled for ARM processors. Notably, Docker was available as .rpm and .deb packages, and so were most of the images (yes, they need to be built for each architecture). docker-compose, however, wasn't, which was a huge bummer for me. I had to jump through some hoops, building several dependencies from source, to make it work. The situation will hopefully improve as ARM adoption on servers grows, but right now you might pay more in working hours than you save by migrating.

RDS (AWS's managed RDBMS service) on Graviton2 is where I think the real win-win is: you don't have to do any setup, and you get all the benefits of an ARM processor on your server.

We also made sure Recap.Dev is easy to run on ARM processors: we introduced multi-arch Docker images and made pre-built ARM AMIs available on AWS.

Debugging with Recap.Dev - a Case Study

Arseny Yankovski

Lead Architect @ eMarketeer

We created Recap.Dev out of our need for better tracing tools. I personally use it in both personal and professional projects. Here's an example of how I used it to fix a bug in an experimental feature in one of my personal projects.

It all started when I was routinely going through error notifications sent to my Slack.

An error message saying the database table doesn't exist