Remote Work – 5 Years Later

This month marks my 5-year anniversary as a remote worker. During that time I have worked for different companies: some were fully remote, some partially remote. In some cases I worked with teams with almost no time overlap (a 9-hour difference); in others, all of us were in the same time zone.

This post is an attempt to summarize my thoughts from a 5-year perspective. I’d also like to share tips both for people who are the remote members of their teams and for people who work exclusively on-site while some of their peers are remote.

Remote work is here to stay

I think remote working is currently one of the biggest trends shifting the IT industry; more and more companies are open to hiring people who work remotely. 5 years ago the situation was very different: remote working was something a lot of people hadn’t even heard of, and as a candidate it was nearly impossible to convince a potential employer to hire you that way.

Right now you can apply to many companies that don’t explicitly advertise that they hire remotely, and very likely they won’t reject you right away.

Over that time we have also seen a massive number of new tools aimed at distributed teams, for example Slack, HipChat, Status Hero, etc.; without them, working remotely wouldn’t be as effective as it can be.

Communication is the key

Over my 5 years as a remote worker, I have spent around 2–3 months working in offices with people on my teams; in most cases I was invited to visit the office and meet everyone in person.
Experiences like that have helped me adapt to the remote environment. Here are the most important observations I have for remote workers:


Make sure that everyone is in the loop about the things you are working on. Daily standups definitely help, as they are a chance to synchronize the whole team, but as a remote worker you need to take this further.

Ideally you are using an issue tracker to track your progress. I use it a lot to write down ideas or comments as I work on a related issue; this creates a sort of log of my progress through the issue and makes my work visible to others. People who are subscribed, or who stumble upon the issue in any other way, can add comments while you are still working.
What’s more, it’s very beneficial to set up filters in your issue tracking tool and periodically (for example, daily) go through the issues others are working on and offer your input where you think it would be valuable.

It also helps to write a more or less detailed plan of your work before starting on individual issues.

Be responsive to others

Make sure to periodically go through all the communication tools your team uses. I usually try to check email 3 times per day (morning, lunch, and before going offline), but I keep HipChat open all the time and scroll through it during short breaks while my project is compiling 🙂. I also get notified when someone mentions me or sends me a direct message.

Additionally, I think you should treat PR review requests with very high priority; often someone is blocked waiting on your review.

Proactive communication

This point is especially useful when there is little time overlap between team members. To avoid blocking others, or being blocked yourself, try to be proactive when asking and answering questions. For example:

If you are about to ask someone how you should design a particular piece of code, instead of asking the question right away, try to come up with a few approaches you might take, briefly write down their pros and cons, and suggest which one you think is right.

This greatly reduces the need to go back and forth when communicating; remember that it might take more than 10 hours to get a response.

Advice for on-site workers

All the points above are also very relevant to on-site workers, but I think there are a few additional things that on-site workers should pay more attention to.

I’m sure you are already familiar with chat tools like HipChat or Slack; what you might be missing is the opportunity to set your status to match your availability. For example, make sure auto-away kicks in after 5 or 10 minutes of inactivity; this way, when you leave your desk, remote people will notice your status and won’t keep pinging you for a reply. Also enable the “away” status to be set when you lock your device.

If your team has regular standup calls where everyone in the office gathers around a single person and all the remote people dial in, a good-quality microphone and camera are a must. Ideally, get a microphone that’s designed to pick up human speech and cancel noise.

It can be very useful to experience being a remote worker yourself; I think it’s most effective if, for example, your team decides on having “Remote Fridays” (or any other day of the week).


As you can see, the majority of my focus was on efficient communication; I believe this is the most important thing to focus on when building, or being part of, a remote team.

Over the last 5 years I think I have made great progress with my communication habits, and I’m convinced there is still a lot of room to learn and grow.

I’m looking forward to the next 5 years as a remote worker.

ETags in Akka HTTP

I have recently been involved in implementing ETags and Last-Modified header support in one of the services based on Akka-http.

I have prepared a quite comprehensive example project that shows how to implement those capabilities in Akka-http based projects.

In this post I’ll describe in a practical manner what ETags are and how to support them in your own projects.

Side note: I’ll focus on ETags and have a section on Last-Modified header at the end.

Quick introduction to ETags

An ETag is basically an additional HTTP header returned by the server that can be treated like a checksum of the response.
The client can later use this value when sending subsequent requests to the same endpoint to indicate which version of the resource it has seen before.

Based on the ETag value provided by the client, the server can decide not to return the HTTP body, indicating this by returning HTTP status 304 Not Modified.

When the client receives a 304 response, it means that the resource the client received previously is still up to date and the server doesn’t need to send it again.

Note that this approach requires the HTTP client (or library) to keep cached responses on its side; in the case of a 304 response, the data should be read from that cache.

Wikipedia has a very good article on ETags.

Note also that the server sets the value of the ETag header, and clients send it back in the If-None-Match request header.

First request

And the body follows
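For illustration, assume the sample service exposes a book resource at /api/books/1 (the path, ETag values, and payload below are hypothetical, not taken from the project):

```
GET /api/books/1 HTTP/1.1
Host: localhost:8080

HTTP/1.1 200 OK
ETag: "7d6bd43ee2ad64b06fed4d8e522ff1c3"
Content-Type: application/json

{"id": 1, "title": "A Book", "lastUpdated": "2017-01-10T10:00:00Z"}
```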

Second request

Second response doesn’t have any body
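Assuming the same hypothetical book resource, the client replays the request with If-None-Match set to the ETag it received, and the server answers with 304 and no body:

```
GET /api/books/1 HTTP/1.1
Host: localhost:8080
If-None-Match: "7d6bd43ee2ad64b06fed4d8e522ff1c3"

HTTP/1.1 304 Not Modified
```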

Third request

After the resource on the server was updated:

And the new body follows
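In the hypothetical exchange, the ETag sent in If-None-Match no longer matches after the update, so the server returns the updated resource with a fresh ETag:

```
GET /api/books/1 HTTP/1.1
Host: localhost:8080
If-None-Match: "7d6bd43ee2ad64b06fed4d8e522ff1c3"

HTTP/1.1 200 OK
ETag: "0f825dee2cd6a1e73f06b4dc17e18a39"
Content-Type: application/json

{"id": 1, "title": "A Book", "lastUpdated": "2017-02-01T12:00:00Z"}
```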

I encourage you to download the sample project I have prepared, which will allow you to try out those commands yourself. I have also added more debug logging so you can see how your request flows through on the server side.

Implementing ETags support

Akka-http already provides a conditional directive that allows us to use ETags quite effectively.

So what’s left for us is to properly include it in our routing and pass the correct arguments.

In my sample project there is a class called BooksApi.

The most important part is where the route wraps the response completion in the conditional directive, passing the current ETag of the requested resource; the directive compares it with the If-None-Match header sent by the client and responds with 304 Not Modified when they match.
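The Scala code itself lives in the sample project; to show the idea without Akka-specific APIs, here is a minimal Python sketch (function and field names are illustrative) of what the conditional check boils down to:

```python
import hashlib
from typing import Optional


def compute_etag(last_updated: str) -> str:
    # Derive the ETag from the resource's lastUpdated timestamp,
    # mirroring the approach described in this post.
    return hashlib.md5(last_updated.encode("utf-8")).hexdigest()


def conditional_response(resource: dict, if_none_match: Optional[str]):
    # Mimics what a conditional directive does: compare the client's
    # If-None-Match value with the current ETag, and skip the body
    # (304) when they match.
    current_etag = compute_etag(resource["lastUpdated"])
    if if_none_match == current_etag:
        return 304, {"ETag": current_etag}, None
    return 200, {"ETag": current_etag}, resource
```

A first request (no If-None-Match) gets a 200 plus the ETag; replaying the request with that ETag yields a 304 until lastUpdated changes.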

A word about Last-Modified header

In some cases ETag and Last-Modified can serve the same purpose; even in my project I calculate the ETag based on the lastUpdated date, because I know that each time a resource changes, its lastUpdated date is updated as well.

Last-Modified is sometimes simpler to understand, but it’s not as universal as ETags. Here are some cases where it won’t work but ETags would:

  • Collections
    When a collection has multiple resources, we can calculate a combined ETag by concatenating the ETags of all the individual resources and then hashing the result with MD5 (or any other hashing algorithm).
    With Last-Modified we don’t have a way to do this: if we take Max(Last-Modified) over all elements, we won’t notice, for example, the removal of elements from the collection.
  • Modification date is not available
    Some resources don’t carry any information about their modification date, yet they can still change at any time. In those cases ETags are the only option.
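The collection case can be sketched with a small hypothetical helper: concatenate the per-resource ETags and hash the result, so any change in membership changes the combined value.

```python
import hashlib


def combined_etag(etags: list) -> str:
    # Concatenate the per-resource ETags (in a stable order) and hash
    # the result. Adding, changing, or removing any element changes
    # the combined hash, which Max(Last-Modified) would miss on removal.
    return hashlib.md5("".join(etags).encode("utf-8")).hexdigest()
```

Unlike taking the maximum of the Last-Modified dates, removals are detected as well, because the removed element's ETag no longer contributes to the hash.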


Support for the ETag and Last-Modified headers is quite easy to add to Akka-http projects.

I have shown how to add this to a single endpoint. The drawback is that it makes the code much more verbose, as there are 3 cases that require handling, each with a different path the request needs to go through.

This could probably be eliminated by defining a more generic function that encapsulates all the logic, but I decided to leave that out for now.

Notes on creating microservices-based applications

This post is a collection of tips and notes I gathered while working on microservices-based applications over the last couple of months.

The notes are divided into a couple of sections that focus on different areas of developing and running your services.

I have decided to write more low-level notes/tips focused on specific problems; for a more high-level overview, see The Twelve-Factor App.

Project Setup

  • Each service should be a self-contained project, hosted in a separate repository.
  • The microservices shouldn’t have any code-level dependencies on each other
    • For example, they shouldn’t depend on each other at build time
  • All shared dependencies should be factored into separate libraries
    • Also keep them as small as possible
  • Ideally the only dependencies you have should be the open source libraries that you use
    • As a workaround, you can also open source your own libraries
  • The repository should have some basic description of what the project does and the steps to start developing it
  • Ideally you should have instructions on how to run the project inside a Docker container
    • This will help other developers, and if you use something like Kubernetes it will also help down the line
  • After adopting Docker as the main tool to deploy your code, create an appropriate repository in ECR or Docker Hub to host your images


  • Apply API-first principles
  • Use widely supported tools like RAML or Swagger to design your API endpoints and schemas first
  • Iteratively implement new endpoints, replacing static examples of the responses with live endpoints
  • Set up infrastructure to validate your schemas
    • Integration testing seems like a good fit: your schemas can be validated by a “proxy” during testing


  • Make sure that you handle error responses from other services or applications you depend on
  • Make sure that you set the correct response type (Content-Type HTTP header)
  • You should also handle API versioning; ideally this should be done at a higher level as well
  • Add support for an X-Trace-Token header, and make sure to pass it along as you make further HTTP requests to other services
  • Also add the X-Trace-Token to all log messages
  • Ideally you could use a Zipkin-like service to help with that
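The trace-token points above can be sketched as follows (helper names are hypothetical; a real setup would hook into your HTTP framework): reuse the incoming X-Trace-Token when present, generate one at the edge of the system, and attach it to both outgoing requests and log lines.

```python
import uuid
from typing import Optional

TRACE_HEADER = "X-Trace-Token"


def ensure_trace_token(incoming_headers: dict) -> str:
    # Reuse the caller's token so the whole request chain shares one id;
    # generate a fresh one when this service is the entry point.
    return incoming_headers.get(TRACE_HEADER) or str(uuid.uuid4())


def outgoing_headers(trace_token: str, extra: Optional[dict] = None) -> dict:
    # Propagate the token on every request made to downstream services.
    headers = {TRACE_HEADER: trace_token}
    headers.update(extra or {})
    return headers


def log_line(trace_token: str, message: str) -> str:
    # Prefix every log message with the token so Kibana (or similar)
    # can stitch one request together across services.
    return f"[trace={trace_token}] {message}"
```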


  • Your services should have a standard health check endpoint
    • You should standardize on what data is shown there
    • The format should be readable by the monitoring infrastructure
    • During health checking, the service should send ping requests to all services it depends on and report the status of those connections
  • You should also have tools to perform instrumentation / metrics collection
    • Tools like Prometheus, New Relic, Grafana or similar can be very helpful here
  • Logs should be written to standard output
  • Error logs should be written to standard error
  • Those logs should be captured by the tooling around your Docker containers (like Kubernetes) and redirected to Kibana or a similar tool
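A sketch of what a health-check payload might contain (field names and format are an assumption; standardize on whatever your monitoring stack can read): the service reports its own status plus the result of pinging each dependency.

```python
def health_report(service: str, version: str, dependency_checks: dict) -> dict:
    # dependency_checks maps a dependency name to a zero-argument
    # callable that returns True when the ping succeeded.
    deps = {}
    for name, ping in dependency_checks.items():
        try:
            deps[name] = "up" if ping() else "down"
        except Exception:
            deps[name] = "down"
    status = "healthy" if all(v == "up" for v in deps.values()) else "degraded"
    return {
        "service": service,
        "version": version,
        "status": status,
        "dependencies": deps,
    }
```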


  • Make sure that you set sensible defaults for all configurable parameters
    • For example, the defaults should allow you to run the service on localhost for development
  • Configuration that changes between environments (for example testing and production) should be read from environment variables
    • These can also be set by Kubernetes or alternative tooling
  • Configuration shouldn’t change while your service is running
    • It’s better to design applications that can quickly restart and apply new configuration than long-running processes that change their config in place
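The first two configuration points can be sketched like this (variable names are illustrative): sensible localhost defaults for development, each overridable per environment via environment variables.

```python
import os


def load_config(env=None) -> dict:
    # Defaults let the service run against localhost with no setup;
    # each value can be overridden per environment, e.g. by Kubernetes.
    env = os.environ if env is None else env
    return {
        "db_host": env.get("DB_HOST", "localhost"),
        "db_port": int(env.get("DB_PORT", "5432")),
        "http_port": int(env.get("HTTP_PORT", "8080")),
    }
```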


  • Set a reasonable timeout for all outgoing calls you make
    • Also consider implementing circuit breakers, like Hystrix, to improve resiliency even more by avoiding cascading failures
  • Make sure your application can continue running while services it depends on are down
    • Make sure your application doesn’t require any manual administration when dependencies go down and later come back up
  • Your service should start up even when its dependencies are not available
    • For example, you shouldn’t make any pre-startup checks such as whether the database is reachable
  • Make sure that an increased rate or complexity of incoming requests won’t kill your application
    • Implement measures to protect your service from abuse
    • For example, set a maximum page[limit] to avoid making heavy database calls or to limit response size
  • Set up an error reporting service
    • Services like Airbrake or Rollbar will notify you of any errors that your service generates
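The circuit-breaker idea can be sketched as a deliberately simplified toy (this is not the Hystrix API): after a threshold of consecutive failures the breaker opens, and subsequent calls fail fast with a fallback instead of hammering the broken dependency.

```python
class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        # Open means: stop calling the dependency and fail fast.
        return self.failures >= self.failure_threshold

    def call(self, fn, fallback):
        if self.open:
            return fallback()
        try:
            result = fn()
        except Exception:
            self.failures += 1
            return fallback()
        self.failures = 0  # a success resets the failure count
        return result
```

A real breaker such as Hystrix also adds call timeouts and a half-open state that periodically retries the dependency; this sketch stays open once tripped.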


  • Services should follow shared-nothing practices
    • You shouldn’t directly modify the state of other services or databases that you don’t ‘own’
    • You also shouldn’t allow other services to modify your internal state
  • Services should be effectively stateless
    • All durable state should live in the database
    • Caching is OK, but your service should function correctly without it
  • It should be possible to start more copies of your service without modifying existing ones
  • Prefer horizontal scalability over vertical
  • Don’t use mechanisms like sticky sessions
    • These usually prevent you from spreading load evenly among the instances of your service


  • The gap between testing and production environments should be as small as possible
    • Ideally these environments should differ only in environment variables and scaling
  • Set up a traffic mirroring service
    • A portion of your live production traffic could be sent over to testing environments
    • This will allow you to spot bugs more easily
  • One-off admin processes that need to be run during deployment should ideally be automated
    • Or at least those scripts should be bundled with your application
    • For example: database schema migrations