Designing APIs with JSON API Specification

The JSON API specification greatly simplifies the process of designing RESTful, JSON-based APIs, and it frequently serves as a great anti-bikeshedding tool that cuts down unnecessary discussions in a team.

In my opinion it doesn't get the attention it deserves, and this post aims to provide a good introduction to the topic.

My goal is not to repeat the official documentation, but rather to add something new to the topic. The specification itself is quite readable, so I suggest reading it if you want to learn more.

What is JSON API

JSON API is a specification, released in 2015, describing multiple aspects of what a proper RESTful API should look like.

It covers the following aspects of an API:

  • URL structure
  • What a request should look like
  • What a response should look like
  • How to create, update and delete resources
  • How errors are reported and how they should be handled
  • Advanced features like server side includes
  • And many more

Motivation For Using JSON API

The team was tasked with designing and implementing a new API for general use. At the time it wasn't clear what the client would be, so it wasn't possible to create an API that meets the requirements of a specific client; the API had to be generic enough to serve many different clients with various use cases.

After initial research the team members divided the endpoints among themselves and everyone quickly went off to implement their own. Problems soon started to appear; to put it simply, the API was inconsistent:

  • Different endpoints used different ways of passing arguments
  • There was no standard around reporting errors
  • Some endpoints supported pagination, others lacked it
  • There was no consistency among returned resources – the same resources were named differently, and different resources shared the same names
  • No consistency among identifiers, dates, numbers, ordering, sorting, etc
  • Because of poor tooling it was almost impossible to detect problems with the documentation
  • The documentation was incomplete or just wrong

Does it sound familiar?

Why You Should Follow Specification

There are several traits of the JSON API specification that make it a very good fit for modern RESTful APIs.


Anti-Bikeshedding Tool

Frequently you are faced with trivial decisions to make, and precisely because the matter is so trivial, everyone on the team has their own idea of how a particular feature should be built.

For example, pagination: should the pagination query parameters be called offset and limit, or page and size?

The JSON API specification has already made all those decisions for you, so instead of focusing on irrelevant issues, or even nitpicking, you can simply accept whatever the specification says and move on to the important problems.
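For pagination specifically, the specification reserves the page query parameter family and leaves the exact strategy to the server; page[number]/page[size] below is one common choice, not something the spec mandates. A quick sketch:

```python
from urllib.parse import urlencode

# JSON API reserves the "page" query parameter family for pagination;
# page[number]/page[size] is one strategy the spec permits.
def paginated_url(base: str, number: int, size: int) -> str:
    query = urlencode({"page[number]": number, "page[size]": size})
    return f"{base}?{query}"

# Brackets get percent-encoded by urlencode:
print(paginated_url("/v1/authors", 2, 10))
# → /v1/authors?page%5Bnumber%5D=2&page%5Bsize%5D=10
```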

It's true that sometimes the specification makes a choice you don't agree with, but in my opinion it's usually more important to accept it and move on than to dwell on irrelevant details.

Iterative Development

I'm in favor of an API-first approach; the process is more or less as follows:

  • Given business requirements, you prepare a few examples of API responses
  • After verification (or in parallel) you prepare API request/response schemas and tweak the examples to match them
  • At this point you can plug those endpoints into your CI process, and client-side API interactions can start being developed
  • Once the schema for your endpoints is stable, you can start implementing the real server responses

Because at every step of the process there's extensive testing in place (which requires some setup if you don't have it yet), as soon as all your tests pass you should be able to transition to the next step pretty painlessly.

JSON API fits that model quite well; the specification comes with a JSON Schema (see the next section).

Additionally, if you don't unnecessarily constrain yourself when designing your API specification, it should be quite easy to add to or change aspects of the responses your server returns.


Formal Specification

JSON API comes with its own formal description specified as JSON Schema. You can validate your API design against it:

  • during your development process
  • as part of CI process

You can also take it a step further and validate your API responses against it. There are tools that act as HTTP proxies, passing through all requests and responses while validating their schemas along the way; usually they can be plugged into an integration test suite executed on a CI server.
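In practice you would validate against the official JSON Schema using a JSON Schema validator library; purely as an illustration of the kind of structural rules it enforces, here is a minimal hand-rolled check of the top-level document structure (the member rules in the comments come straight from the specification; the function name is mine):

```python
import json

def check_toplevel(document: str) -> list[str]:
    """Check a few top-level JSON API document rules; return a list of problems."""
    doc = json.loads(document)
    problems = []
    # A document MUST contain at least one of: "data", "errors", "meta".
    if not any(k in doc for k in ("data", "errors", "meta")):
        problems.append("missing data/errors/meta")
    # "data" and "errors" MUST NOT coexist in the same document.
    if "data" in doc and "errors" in doc:
        problems.append("data and errors must not coexist")
    return problems

print(check_toplevel('{"data": {"type": "authors", "id": "1"}}'))  # []
print(check_toplevel('{}'))  # ['missing data/errors/meta']
```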


Example API

When designing a JSON API compatible API, I think the top priority is to establish what resources your system has and what the relationships between them are.

In this and the next section I'll be using the following API as an example: – I suggest cloning it locally and playing around. I'll modify the examples in this post to improve readability. It's probably not a perfect API design, but it's useful as a learning example.

Tip: use a tool like Postman to browse the API. Postman makes all relative URLs returned by the server clickable, so you can browse the API just by pointing and clicking at parts of the server response.


HATEOAS

HATEOAS (Hypermedia as the Engine of Application State) is a constraint of REST in which all client interaction with the server happens entirely through dynamic links provided by the server. In the ideal scenario this means that the client can be "dumb", making no assumptions about the application it's interacting with, with all the logic performed by the server.

There are several features (covered in the next sections) that make it possible to create Hypermedia-style applications based on JSON API.


Resources

When designing RESTful APIs the most important concept is the resource. The example API has 3 types of resources:


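An author resource of the shape described below might look roughly like this (the attribute names and values are illustrative, not taken from the real example API):

```python
import json

# Hypothetical author resource; field names are illustrative only.
author = {
    "id": "1",
    "type": "authors",
    "attributes": {
        "name": "Jane Doe",
        "date_of_birth": "1918-12-21",
        "date_of_death": "1986-05-25",
        "created_at": "2015-01-01T00:00:00Z",
        "updated_at": "2015-01-01T00:00:00Z",
    },
    "relationships": {
        "books": {"links": {"self": "/v1/authors/1/relationships/books"}},
        "photos": {"links": {"self": "/v1/authors/1/relationships/photos"}},
    },
    "links": {"self": "/v1/authors/1"},
}
print(json.dumps(author, indent=2))
```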
Let’s analyze it in some detail:

  • id – the identifier of the resource
  • type – identifies the type of the object. A type would have its own specification listing all required and optional attributes, what relationships an object can have, etc.
  • attributes – a list of all attributes this resource has; in our case there are 5 different ones
  • relationships – identifies how this resource is linked to other resources in our system; here we have links to books and photos
  • links – a link used to fetch this resource; if this were a collection of resources, we would have pagination links here

A book object also looks quite simple. Note that it has a lot more relationships, which allow you to traverse the API.


Chapters is a simple resource type, included here for completeness.


Relationships

Relationships are another very important concept: they allow us to traverse the resources in our system, and because all links are created dynamically, the client doesn't need to hard-code any of them.

Each of the relationships.self links can be followed; for example, by going to /v1/authors/1/books a client will get the collection of books written by the author with id 1.

Now let's take this one step further: what if I would like to get an author and all the books written by them? Normally I would need to make 2 requests, to /v1/authors/1 and /v1/authors/1/books.

JSON API specifies a feature called “server side includes” which allows me to combine those 2 requests into one as follows: /v1/authors/1?include=books complete response

Here’s a snippet of the response:
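A response to /v1/authors/1?include=books has roughly the following shape (the attribute names here are illustrative, not from the real example API):

```python
import json

# Hypothetical shape of GET /v1/authors/1?include=books.
response = {
    "data": {
        "id": "1",
        "type": "authors",
        "relationships": {
            "books": {
                "links": {"self": "/v1/authors/1/relationships/books"},
                # "data" appears once the relationship is included:
                "data": [
                    {"type": "books", "id": "1"},
                    {"type": "books", "id": "2"},
                ],
            }
        },
    },
    "included": [
        {"id": "1", "type": "books", "attributes": {"title": "Book One"}},
        {"id": "2", "type": "books", "attributes": {"title": "Book Two"}},
    ],
}
print(json.dumps(response, indent=2))
```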

Let’s analyze the included section:
This attribute is a collection of all books written by that author. Each included resource has a type that allows us to decode what kind of resource it is – there's nothing preventing us from including more types of relationships; for example, you could include both books and stores in the following request: /v1/authors/1?include=books,stores.

Note also what happened to the relationship's data attribute: it wasn't present before, and appeared only after explicitly including that relationship. This helps with matching which resource references which "included resources", because the included section of the response is a flat collection.
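Because the included section is flat, a client resolves it by (type, id) pairs taken from the relationship's data member. A minimal sketch of that lookup (the document shape is illustrative):

```python
# Illustrative document: one author with an included book and store.
doc = {
    "data": {
        "type": "authors",
        "id": "1",
        "relationships": {
            "books": {"data": [{"type": "books", "id": "2"}]},
        },
    },
    "included": [
        {"type": "books", "id": "2", "attributes": {"title": "Some Book"}},
        {"type": "stores", "id": "7", "attributes": {"name": "Some Store"}},
    ],
}

# Index included resources once by (type, id), then follow the linkage.
index = {(r["type"], r["id"]): r for r in doc["included"]}
books = [
    index[(ref["type"], ref["id"])]
    for ref in doc["data"]["relationships"]["books"]["data"]
]
print(books[0]["attributes"]["title"])  # Some Book
```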

Examples summary

I have shown 2 basic concepts of JSON API: resources and relationships.

The specification goes into much more detail on the following aspects:

  • Sorting
  • Pagination
  • Error handling
  • Filtering
  • Creating and updating resources
  • And many more

I didn’t cover them here because those concepts are very straightforward to understand once you learn about resources and relationships, but you can read more in the specification.



Libraries

There are multiple client- and server-side libraries; the official list includes at least a few libraries per language/technology, so there's plenty to choose from depending on your preferences.

Some notable examples I have used or tried:

Take a look at the examples section, which shows a few example implementations offering interactive tools to browse and play with the API.

Here's an example of how you could encode an Author class using scala-jsonapi, based on the example response shown before:

Editor support

I have had the most success using Visual Studio Code with the plain JSON Tools plugin. In general you don't need any special tools or editors to design in this format – it's just JSON.

JSON Schema

JSON Schema allows you to validate your JSON documents against a schema (a formal specification). There's an official JSON Schema provided as part of the JSON API specification, meaning that your documents can be validated against it.

As long as your API documentation passes schema validation, you can be quite sure that JSON API clients won't have any problems accessing your resources.

The following snippet is an example of how you could validate UUIDs with JSON Schema:
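One common way to express a UUID constraint in JSON Schema is a string with a pattern (newer drafts also support "format": "uuid"). Below, the same pattern is exercised from Python; the helper function is mine, for illustration:

```python
import re

# JSON Schema fragment for a UUID-shaped string. Drafts 2019-09 and later
# also allow the shorter {"type": "string", "format": "uuid"}.
uuid_schema = {
    "type": "string",
    "pattern": "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
               "[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$",
}

def looks_like_uuid(value: str) -> bool:
    """Check a value against the schema's pattern, as a validator would."""
    return re.fullmatch(uuid_schema["pattern"], value) is not None

print(looks_like_uuid("123e4567-e89b-12d3-a456-426614174000"))  # True
print(looks_like_uuid("not-a-uuid"))                            # False
```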

Problems With JSON API

JSON API, like any significant tool, comes with some drawbacks. I have spent around 8 months with it (as an API designer and back-end developer), and here's what I was able to observe.

Generic API

One of the biggest problems with JSON API is that the API you design will in most cases end up being quite generic. I think this comes from the fact that you are thinking in terms of resources while designing it, whereas clients usually want more specific actions.

The good thing is that this design will cover a lot of use cases and satisfy many clients, but as a result some clients will have to do additional work to get what they need.

Note that this isn't necessarily a disadvantage, especially when the client isn't known or doesn't exist yet; in those cases you need to stay generic and specialize later.

Weird Workarounds

JSON API is quite a restrictive specification: you are not allowed to do many things that would be easy without a specification in the way. After a while you will develop a few patterns for working around those limitations.


Actions

There might be cases when you need to compensate for the fact that your API doesn't expose the ability to perform actions and only operates in terms of resources.

The most common scenario: when a client wants the server to perform some work, instead of sending a POST request with that action, the client adds a document to a work request collection. The server in turn returns a resource location in the work completed collection, and the client sends a GET request to that location to receive its result. So instead of a single POST request you sometimes need to make 2 or more.
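Sketched as payloads, the round-trip might look like this (the endpoint, type and field names are hypothetical, chosen only to illustrate the pattern):

```python
# Hypothetical async "work request" flow: instead of POST /reports/generate,
# the client creates a resource in a work-request collection...
create_request = {
    "data": {
        "type": "report-requests",
        "attributes": {"report_kind": "yearly-sales"},
    }
}

# ...the server answers (e.g. with 202 Accepted) and points at the resource
# the client should poll for the result:
create_response = {
    "data": {
        "type": "report-requests",
        "id": "42",
        "links": {"self": "/v1/report-requests/42"},
    }
}

# The client then issues GET /v1/report-requests/42 until the work is done.
print(create_response["data"]["links"]["self"])
```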

Image Upload

Another very common scenario is uploading images – something a lot of APIs allow clients to do and take for granted. This is problematic because both client and server must use the media type application/vnd.api+json, which prevents you from sending image/png; see this SO link: for suggestions.

Slower Pace

As with every tool that requires more work, the pace is slower at the beginning. This factor is multiplied when the team hasn't worked with JSON API or any similar specification before.



GraphQL

GraphQL serves as an alternative to the RESTful approach in general (not just JSON API). It covers a very similar area to JSON API, but does things very differently in terms of what queries and responses look like.

I think one aspect worth stressing is that GraphQL embeds child resources as a recursive tree, whereas in JSON API the embedded (included) resources always come as a flattened list, which is slightly harder to interpret.
IMO this is a great advantage of GraphQL, and I'm very interested in trying it out in my next project.

Swagger / Open API / RAML

I decided to put these into a single category because they serve a similar purpose. In the majority of cases you can think of them as complementary to JSON API. These tools help you design an API and generate documentation (Swagger goes even further with an automatic API playground).

They (at least in their base form) tend to focus on generating documentation and don't try to impose any formal requirements on what API endpoints, requests or responses should look like. This makes them very versatile: they can be used on their own when designing an API without any specification.

My suggested approach is to:

  • Design your API according to (JSON API) specification
  • Use Swagger or RAML at the same time to give your API documentation a structure
  • Generate pretty HTML documentation out of RAML or use automatic Swagger tool to do this


Summary

This post is an introduction to the topic of JSON API, and I think this tool is a great way to solve many problems around API design.

I think it didn't get enough attention in the past and still doesn't. Additionally, there seems to be a lack of support from commercial users, and newer tools like GraphQL or Swagger are taking over the market. According to the official website, the last release of the JSON API specification was in 2015, and there have been no updates since.

JSON API has many benefits, which I hope I have been able to point out, and in my opinion it is still a good tool for designing large APIs that will live for a long time, with many different clients integrating. At the same time, for a smaller project or one with a short lifespan, JSON API would most likely block you from making progress, and sticking with plain RAML or Swagger while applying some best practices would be better.

I’m very interested in that space and watch it closely.

Why You Should Adopt GitFlow

GitFlow is a convention for structuring your development process. It tells you when you should branch and merge your changes, and helps you with hotfixes. By organizing branches effectively it unblocks developers, allowing them to stay effective.

Key characteristics

Work happens in parallel

Each developer is free to work on their individual features; all that's required is to branch off the develop branch and, once the feature is done, create a PR back to develop.
At a later point in time, the current state of develop is picked up and a new release is created.
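The branch-and-merge cycle above can be sketched with plain git commands in a throwaway repository (the feature/ branch name prefix is a common convention, not a GitFlow requirement):

```shell
set -e
# Throwaway repository to demonstrate the GitFlow feature cycle.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit --allow-empty -qm "initial commit"  # on the default (production) branch
git branch develop                             # long-lived integration branch
git checkout -qb feature/my-feature develop    # each feature branches off develop
git commit --allow-empty -qm "feature work"
git checkout -q develop                        # the PR is merged back to develop...
git merge -q --no-ff feature/my-feature -m "merge feature/my-feature"
git log --oneline -n 1                         # ...from which a release is cut later
```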

Focus on releases

GitFlow is centered on releases (see the comparison to GitHub Flow below), which means it's best suited when you release your code often, but at the same time not too often.
There is some (small) overhead related to organizing a release, so in most cases you would be releasing at most once per day (just a rule of thumb).

Clear separation of ongoing work vs hotfixes

Hotfixes that affect live application don’t interfere with ongoing work.

Most teams struggle with this in simpler workflows (or when there's no workflow at all), especially when each PR is merged back to master after review and the release of master happens a while later.

In case of emergency they need to figure out exactly which version is running live and branch off at that specific point in the master branch; this is messy and always ad hoc.

GitFlow deals with this cleanly because there's one more level of separation between ongoing work, the staging area (release branches) and the production branch (master).

No more ad-hoc decisions

GitFlow is a simple but at the same time quite comprehensive model/convention for how you and your team should structure your development process.

It can be compared to an application framework that tells you in which folders you should place specific files: GitFlow tells you when to branch and merge.

Reading material

There’s already a good amount of reading material about it, so I’m not going to repeat those resources:
* – good and short introduction to GitFlow
* – article that introduced GitFlow to the public
* – simplified version of the model: GitHub Flow
* – a comparison of few Git workflows

Note about GitHub Flow

GitHub Flow in general is very similar to what most people do with git out of the box, with one exception: GitHub Flow requires you to deploy master to production after each merge. In my experience most teams are not ready to adopt this, or feel uneasy about it, so GitFlow might be a better convention for them.


Tooling

There is a whole tool dedicated to GitFlow which might be useful for some people. In my experience, I'm happy just using hub to help with creating branches and PRs:


Summary

I think that once a development team reaches a certain size, you should introduce a workflow to organize your development.

GitFlow seems like a good choice: it's well documented, widely used and easy to understand.
It brings a small overhead, but there's always a price to pay for having structure. I think it's definitely worth having some structure rather than making ad hoc decisions, especially at stressful moments like hotfixes on production.

Remote Work – 5 Years Later

This month marks my 5 year anniversary as a remote worker. During that time I have worked for different companies: some were fully remote, some partially remote. In some cases I worked with teams with almost no time overlap (a 9 hour difference); in others, all of us were in the same time zone.

This post is an attempt to summarize my thoughts from a 5 year perspective. I'd also like to share tips both for people who are remote members of their teams and for people who work exclusively on-site while some of their peers are remote.

Remote work is here to stay

I think remote working is currently one of the biggest trends shifting the IT industry; more and more companies are open to hiring people who work remotely. 5 years ago the situation was very different: remote working was something a lot of people hadn't heard about, and as a potential hire it was nearly impossible to convince a potential employer to hire you like that.

Right now you can apply to many companies that don't explicitly advertise that they hire remotely, and very likely they won't reject you right away.

Over that time we have also seen a massive number of new tools aimed at distributed teams, for example Slack, HipChat, Status Hero, etc.; without them, working remotely wouldn't be as effective as it can be.

Communication is the key

Over my 5 years as a remote worker, I have spent around 2-3 months working in offices with the people on my teams; in most cases I was invited to visit the office and meet everyone in person.
That experience has helped me adapt to the remote environment. Here are the most important observations I have for remote workers:


Keep everyone in the loop

Make sure that everyone is in the loop about the things you are working on. Daily standups definitely help – they are a chance to synchronize the whole team – but as a remote worker you need to take this further.

Ideally you are using an issue tracker to track your progress. I use it a lot to write down ideas or comments as I work on an issue, which creates a sort of log of my progress and makes my work visible to others; people who are subscribed, or who stumble upon the issue some other way, can add comments while I'm working.
What's more, it's very beneficial to set up filters in your issue tracking tool and periodically (for example daily) go through the issues others are working on, offering your input where you think it would be beneficial.

It also helps to write a more or less detailed plan of your work before starting to work on individual issues.

Be responsive to others

Make sure to periodically go through all the communication tools your team uses. I usually try to check email 3 times per day (morning, lunch, and before going offline), but I keep HipChat open all the time and scroll through it during short breaks while my project is compiling 🙂 . I also get notified when someone mentions me or sends me a direct message.

Additionally I think you should treat PR review requests with very high priority, often someone might be blocked waiting on your review.

Proactive communication

This point is especially useful when there's little time overlap between team members, or in similar situations. To avoid blocking others, or being blocked yourself, try to be proactive when asking and answering questions. For example:

If you are about to ask someone how you should design a particular piece of code, instead of asking the question right away, try to come up with a few approaches you might take, briefly write down their pros and cons, and suggest which one you think is right.

This greatly reduces the back and forth in communication – remember that it might take more than 10 hours to get a response.

Advice for on-site workers

All the points above are also very relevant to on-site workers, but I think there are a few additional things on-site workers should pay more attention to.

I'm sure you are already familiar with chat tools like HipChat or Slack; what you might be missing is the opportunity to set your status to match your availability. For example, make sure auto-away kicks in after 5 or 10 minutes of inactivity; this way, when you leave your desk, remote people will notice your status and won't keep pinging you for a reply. Also enable the "away" status to be set when you lock your device.

If your team has regular standup calls where everyone gathers around a single person in the office and all remote people dial in, a good quality microphone and camera are a must. Ideally, get a microphone designed to pick up human speech and cancel noise.

It can be very useful to experience being a remote worker yourself; I think it's most effective if, for example, your team decides to have "Remote Fridays" (or any other day of the week).


Summary

As you can see, the majority of my focus was on efficient communication; I believe this is the most important thing to focus on when building or living in a remote team.

Over the last 5 years I think I have made great progress with my communication habits, and I'm convinced there is still a lot of room to learn and grow.

I'm looking forward to the next 5 years as a remote worker.