Running Tor Node Inside Docker

Over the last few weeks I have been playing around with creating a Docker container to host a Tor node on one of my VPS servers.

As a result of those efforts I created a GitHub repository with the Docker image: https://github.com/wlk/docker-tor-relay. It's also hosted on Docker Hub here: https://hub.docker.com/r/wlkx/docker-tor-relay/ (and configured as an automated build).

Both the Docker Hub and GitHub READMEs provide enough information on how to use it, so I won't go into details here; instead I'll focus on two Docker features that I used in this toy project.

There are many Tor images out there, but in my case I made two changes.

Mounting volumes to persist state between container restarts

This is a well-known Docker feature: you can share local volumes (directories or files) with a container and configure read or write access.
This way, when you restart the container or start a new container from the same image, your files will not be lost, as long as you set up the same mount options.

In this project I'm using volumes to persist Tor configuration and state across restarts. You can read more about this feature here: https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume

Running Docker container in a separate network

To achieve better security and isolation of the containers running on my host (remember that the Tor container can be one of many containers running on the same host), I have set up a separate network to host the Tor containers.

This feature is documented here: https://docs.docker.com/engine/userguide/networking/ if you’d like to know more, but I’m going to show how I configured it in my project:

  • Create separate network:
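The command itself isn't reproduced in this excerpt; creating a user-defined network likely looks like this (the network name comes from the text, and the driver defaults to bridge):

```shell
# Create a user-defined bridge network to isolate the Tor containers
docker network create tor_network
```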

This creates a separate network named tor_network; you can verify that it was created correctly:
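Listing the networks should show tor_network next to Docker's built-in ones (exact IDs will differ on your host):

```shell
# bridge, host and none are Docker's default networks;
# tor_network should now appear alongside them
docker network ls
```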

The bridge network is the default one, so all containers started without a network parameter specified will run in the same default network.

  • Run container in isolated network:
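A sketch of the run command, assuming the image from the repository above; the volume paths are hypothetical and only illustrate the persistence setup from the previous section:

```shell
# --network places the container in the isolated network;
# -v mounts a host directory so Tor state survives restarts
# (host and container paths here are illustrative)
docker run -d \
  --network=tor_network \
  -v /srv/tor-data:/var/lib/tor \
  wlkx/docker-tor-relay
```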

The --network=tor_network parameter specifies that the container will run in tor_network.

  • Verifying that it works

To check that this is working as expected, I inspected two containers on separate networks and saw their IPs:
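One way to do that check, assuming container names tor-relay and web (both hypothetical); containers attached to different bridge networks normally land in different subnets, e.g. 172.17.0.x vs. 172.18.0.x:

```shell
# Print each container's IP address(es) across its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tor-relay
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
```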

As you can see, these containers run in separate networks with no connection between them: pinging one from the other fails with a timeout, as does any other network connectivity.

Summary

Using more advanced Docker features I was able to achieve better isolation between containers and, as a result, better security.
The docker-tor-relay image is a very straightforward and safe way to help the Tor network by running your own relay node.

Simple way to create Scala scripts

This post is a description of a small project idea developed by a friend of mine, Przemysław Pokrywka; I'm just writing the idea down as a blog post.

There are many ways to execute Scala code. Most people use sbt to create some kind of build artifact, for example a fat JAR, or use sbt-native-packager to package the application in more native formats.

But what options do you have in case you want to write Scala scripts?

You can use the scala command to execute something, for example like this:
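A minimal example, using a hypothetical standard-library-only one-liner and assuming the scala launcher is on your PATH:

```shell
# Evaluate a Scala expression directly, no build tool involved
scala -e 'println("Hello from a Scala script")'
```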

This is quite useful when you are using only the standard library, but when your script requires more dependencies you have to figure out how to manage them properly, and very quickly this becomes troublesome.

You can achieve similar results by using Ammonite, especially with its "Scala Scripts" feature. I find this a little troublesome, mostly because it makes it harder to edit the files inside an IDE.

The option I’m suggesting allows you to take this one step further:

  • A single file that is both valid Bash and Scala
  • Dependencies managed via coursier
  • IDE support via sbt
  • Only Bash and a JVM required to run it

The whole script is available here: https://github.com/przemek-pokrywka/play-framework-app-in-a-single-file

You can execute it now and you should see the following output:

After going to the url you should see this:

So by just running a simple Bash script we were able to start a very simple Play application and accept HTTP requests!

I’m going to go over it step by step and explain everything.

Line 1

The first line of the file was added because Bash attempts to execute the file.
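The exact line isn't reproduced in this excerpt; in a Bash/Scala polyglot like this, it is typically just a shebang so that the shell hands the file to Bash (the Scala part further down is never reached, because Bash exits first):

```shell
#!/usr/bin/env bash
```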

Lines 5-7

This downloads the coursier launcher if it's not already available.
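A sketch of such a guard, using cr as the launcher name (the text refers to a cr fetch command later); the download URL shown is coursier's current launcher location and may differ from the original script:

```shell
# Fetch the coursier launcher only if we don't already have it
test -e cr || {
  curl -fLo cr https://github.com/coursier/launchers/raw/master/coursier
  chmod +x cr
}
```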

Lines 9-14

This is a normal Bash array that stores the Scala dependencies; we'll be using it later. Note that we are actually using ammonite-repl here to execute the script.
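A sketch of such an array; the coordinates and versions below are illustrative, not the original script's pins:

```shell
# coursier coordinates in GROUP:ARTIFACT:VERSION form.
# ammonite-repl is included because it is what runs the Scala code.
DEPENDENCIES=(
  "com.lihaoyi:ammonite-repl_2.11.8:0.8.2"
  "com.typesafe.play:play-netty-server_2.11:2.5.12"
)
```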

Lines 16-20

This generates a build.sbt file with a libraryDependencies section, so that you can import and edit the script inside an IDE and all dependencies will be resolved correctly.
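A hypothetical sketch of that generation step: split each GROUP:ARTIFACT:VERSION coordinate and emit one libraryDependencies line per dependency (the Scala version and the sample coordinate are assumptions):

```shell
# Illustrative array; the real script reuses the one defined earlier
DEPENDENCIES=( "com.lihaoyi:ammonite-repl_2.11.8:0.8.2" )

{
  echo 'scalaVersion := "2.11.8"'
  for dep in "${DEPENDENCIES[@]}"; do
    # Split GROUP:ARTIFACT:VERSION on the colons
    IFS=: read -r org artifact version <<< "$dep"
    echo "libraryDependencies += \"$org\" % \"$artifact\" % \"$version\""
  done
} > build.sbt
```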

Lines 22-24

Ammonite uses a caching mechanism to prevent unnecessary recompilation; we also save everything after object script into a temporary file.
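The extraction can be sketched with sed: copy everything from the object script marker to the end of the file into a temp file. In the real script the source file is "$0" (the polyglot itself); here a small demo file stands in for it, and the marker and temp-file naming are assumptions:

```shell
# Demo file standing in for "$0" (the polyglot script itself)
printf '%s\n' '#!/usr/bin/env bash' 'exit' 'object script {' '  // Scala code' '}' > demo.sh

# Copy everything from the "object script" marker to end-of-file
SCRIPT_FILE=$(mktemp /tmp/script-XXXXXX.scala)
sed -n '/^object script/,$p' demo.sh > "$SCRIPT_FILE"
```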

Line 26

This is where the magic happens: the cr fetch command downloads all dependencies listed in the previously defined Bash array (if they were not downloaded before). The output of the command is a list of those JARs; this output is captured in the CLASSPATH variable, which we'll be using in the next step.
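The step can be sketched like this, assuming the cr launcher and the DEPENDENCIES array from earlier; coursier's -p option asks it to print the resolved JARs as a classpath:

```shell
# Resolve/download all dependencies and capture the classpath
CLASSPATH=$(./cr fetch -p "${DEPENDENCIES[@]}")
```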

Lines 27-31

We execute a straightforward java process (with the CLASSPATH from the previous step), starting the Ammonite REPL and feeding it the Scala part of the script.
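A hedged sketch of that invocation; the Ammonite main class name has moved between releases (ammonite.repl.Main in the 0.8.x line mentioned above), so the exact name is an assumption:

```shell
# Start Ammonite from the fetched classpath and hand it the
# extracted Scala file
java -cp "$CLASSPATH" ammonite.repl.Main "$SCRIPT_FILE"
```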

Line 33

This prevents Bash from processing the rest of the file.

Lines 36-57

This is the actual Scala code that we are running.

Applications

The most immediate application for a script like this is some sort of quick-start guide, where by just running one Bash command you can run some Scala code or set up a development environment.
It should also be suitable for more advanced Scala CLI scripts that fetch multiple dependencies, as long as the scripts are not intended to be used for a long time (if you happen to have that scenario, you might be better off generating a fat JAR).