
This being my first post and all: Welcome! I hope that you find some interesting content here; if not, I am open to suggestions. Let’s start off with some more tech-related stuff:

Besides Python, I mostly prefer Scala for personal (and sometimes professional) projects, since I really like its (albeit complex) elegance, its static typing, and its functional approach to programming. So to stay on top of current frameworks and patterns, I decided to use Scala once again.

This project will be a simple web service / web application that lets a user manage their tasks, and with this blog post I am going to cover its foundation: the data layer. As you may have already suspected, I will be using a multi-tier architecture to create a RESTful web service that communicates with an AngularJS single-page app as a front end. The goal for this post is to evaluate some ORMs and do a proof-of-concept implementation showing that the selected framework and crate-jdbc work together within a readable (and testable) architecture.

The Database

Since I am working at Crate.IO, the database was not much of a choice :). Crate is a distributed SQL database built on top of Elasticsearch - which means we can use an old-fashioned object-relational mapper! However, the SQL support is limited (currently no transactions, no AUTO INCREMENT) - which significantly narrows down the field (under the hood, most of the mappers rely on one or the other …). Even more so since Crate’s JDBC driver likes to throw NotImplementedExceptions …

The Mapper

During our Snowsprint at the end of January (in Hittisau, Austria) I started off by selecting an OR-Mapper/JDBC-abstraction…

… of which there are not many:

Now, I did some work with Slick before, but I did not like it that much:

  • Imports are database dependent (this really messes with loose coupling): import slick.driver.H2Driver.api._
  • Many of the provided features (streaming, …) would not make sense with Crate

Skinny ORM/SORM looked very promising; however, their SkinnyCrudMapperWithId seemed to insist on using AUTO INCREMENT, which Crate doesn’t support :(. In addition to that, development does not seem very active on their GitHub repositories.

So I resorted to using ScalikeJDBC, which is the foundation of Skinny ORM anyway and also lets me write plain SQL (it is also pretty actively developed)!

The Code

In Scala, I really enjoy an architectural pattern called the ‘Cake Pattern’, which effectively adds dependency injection with prerequisites checked at compile time. I am also going to use the DAO pattern, since it lets me tuck away SQL nicely.
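The pattern itself boils down to a few lines - here is a minimal, self-contained sketch (the names are purely illustrative, not from this project):

```scala
// a component "interface": something that provides a greeting
trait GreetingRepositoryComponent {
  def greeting: String
}

// one concrete implementation of that component
trait EnglishGreetingRepository extends GreetingRepositoryComponent {
  override def greeting: String = "Hello"
}

// a component that *requires* a GreetingRepositoryComponent via a self-type:
// it cannot be instantiated unless one is mixed in - checked at compile time
trait GreeterComponent {
  this: GreetingRepositoryComponent =>

  def greet(name: String): String = s"$greeting, $name!"
}

object CakeDemo extends App {
  // the dependency is "injected" simply by mixing the traits together
  val greeter = new GreeterComponent with EnglishGreetingRepository {}
  println(greeter.greet("Scala")) // Hello, Scala!
}
```

Leaving out the `with EnglishGreetingRepository` part is a compile error, not a runtime surprise - that is the whole appeal.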

So first, I have created traits for the components: DAO with a BaseDAO to reduce boilerplate code and a DAOComponent to wrap it:

In order to get an instance of the DAO, a call to the userDao() method will do the trick. By the way, the User looks like this:
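Since the original snippets are not shown here, the following is a sketch of how these interfaces could look - everything beyond the names mentioned in the text (DAO, BaseDAO, UserDAOComponent, userDao, User) is an assumption:

```scala
// generic data-access interface for an entity type T
trait DAO[T] {
  def save(obj: T): T
  def del(obj: T): Unit
  def getByName(name: String): Option[T]
  def createTable(): Unit
}

// base trait where boilerplate shared between concrete DAOs would live
trait BaseDAO[T] extends DAO[T]

// component wrapper for the cake-pattern wiring; userDao() hands out the DAO
trait UserDAOComponent {
  def userDao(): DAO[User]
}

// the entity itself - `name` doubles as the primary key
case class User(name: String)
```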

With the interfaces set up - let the implementation begin!

First (SQLUserDAOComponent.scala, line #7) I want to have a convenience object for parsing query results where I can add my table name (and, for sanity, I will also define a schema). As you may have noticed, the name of the object is also User, which is why I imported the full xyz.x5ff.bklg.common package (SQLUserDAOComponent.scala, line #3) - this is also nicer than renaming imports.
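Roughly, such a companion could look like this with scalikejdbc (a sketch, assuming the User case class from above; schema and table names follow the log output further down):

```scala
import scalikejdbc._

// companion object doubling as table metadata and result-set mapper
object User extends SQLSyntaxSupport[User] {
  override val schemaName: Option[String] = Some("bklg")
  override val tableName: String = "Users"

  // maps an aliased result row (e.g. column `n_on_u`) back to the case class
  def apply(u: SyntaxProvider[User])(rs: WrappedResultSet): User =
    User(rs.string(u.resultName.name))
}
```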

The next structure is a trait that extends the previously defined UserDAOComponent “interface” (or rather: trait). First, we explicitly denote the types this component requires by using a self-type annotation with ‘this’ - which also prohibits the trait from being instantiated without that type mixed in, checked at compile time.

Within that trait a class SQLUserDAO is defined which implements the actual DAO interface and its methods. For simplicity, I am going to use scalikejdbc’s AutoSession for now, and since Crate does not support transactions (yet?) it’s actually better to avoid them (UnsupportedOperationException).
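A sketch of that skeleton - the method bodies follow in the next sections, and the CrateSQLDriver trait (named later in the post) is assumed to provide the connection setup:

```scala
import scalikejdbc._

trait SQLUserDAOComponent extends UserDAOComponent {
  // self-type: a CrateSQLDriver must be mixed in - enforced by the compiler
  this: CrateSQLDriver =>

  override def userDao(): DAO[User] = new SQLUserDAO

  class SQLUserDAO extends DAO[User] {
    // no transactions in Crate, so a plain auto-commit session avoids
    // UnsupportedOperationExceptions
    implicit val session: DBSession = AutoSession

    // sketched in the following sections
    override def save(user: User): User = ???
    override def del(user: User): Unit = ???
    override def getByName(name: String): Option[User] = ???
    override def createTable(): Unit = ???
  }
}
```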

The first method to be implemented is save on line #31: By using scalikejdbc’s DSL I can insert into the database with type checking.

Currently I have no way of tracking whether an object is already in the database (and thus needs updating), so this is one thing on my to-do list :) - for now, returning the original object after inserting is fine.
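Inside SQLUserDAO, the method body could look roughly like this (a sketch using scalikejdbc's QueryDSL, assuming the User companion from above):

```scala
// always inserts - update tracking is still on the to-do list
override def save(user: User): User = {
  withSQL {
    insert.into(User).namedValues(User.column.name -> user.name)
  }.update.apply()
  user
}
```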

After that, createTable takes care of initializing the table, this time with a template string that is converted to an SQL literal. This is due to scalikejdbc: when using its SQL interpolation, every interpolated value is replaced with a ‘?’ placeholder. Handy for parameters, not so much for table names…
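One way to do this is scalikejdbc's SQLSyntax.createUnsafely, which embeds the string verbatim instead of parameterizing it - a sketch:

```scala
// sql"..." would turn an interpolated table name into a '?' parameter,
// so the DDL string is wrapped into an SQL literal first
override def createTable(): Unit = {
  val ddl = SQLSyntax.createUnsafely(
    s"create table if not exists ${User.schemaName.get}.${User.tableName} (name string primary key)"
  )
  sql"$ddl".execute.apply()
}
```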

The method del removes an object (and currently will fail if no such object was found).
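Sketched with the same DSL:

```scala
// removes the row matching the primary key; no check (yet) whether it existed
override def del(user: User): Unit =
  withSQL {
    delete.from(User).where.eq(User.column.name, user.name)
  }.update.apply()
```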

For selecting an object, getByName (line #51) tries to find an object with the provided name (which is also the primary key). Again the scalikejdbc DSL comes in handy, but because of the syntax method, scalikejdbc puts aliases on the result columns so it can map them later on (see output below). Hence the result has to be mapped with map(User(u)).single.apply(), passing the QuerySQLSyntaxProvider[SQLSyntaxSupport] into the object rather than an already-mapped value.
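Put together, a sketch of the lookup:

```scala
// the syntax object `u` aliases the table and its columns (hence n_on_u below)
val u = User.syntax("u")

override def getByName(name: String): Option[User] =
  withSQL {
    select.from(User as u).where.eq(u.name, name)
  }.map(User(u)).single.apply()
```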

Here’s how to instantiate and run the code:

As a result of the cake pattern, val userDaoComponent = new SQLUserDAOComponent with CrateSQLDriver {} provides a nice way to mix together a component with the compiler checking if it fits :). If a different driver would implement the same interface as CrateSQLDriver, the database could easily be swapped. After compiling and running everything this is the (truncated) result:
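The wiring could look like this - note that the driver class name and connection URL are assumptions on my part, so check the crate-jdbc documentation for your version:

```scala
import scalikejdbc._

// driver trait that sets up the JDBC connection (class name and URL assumed)
trait CrateSQLDriver {
  Class.forName("io.crate.client.jdbc.CrateDriver")
  ConnectionPool.singleton("crate://localhost:4300", "crate", "")
}

object Main extends App {
  // mix the component with a concrete driver - checked at compile time
  val userDaoComponent = new SQLUserDAOComponent with CrateSQLDriver {}
  val dao = userDaoComponent.userDao()

  dao.createTable()
  dao.save(User("alice"))
  println(dao.getByName("alice"))
  dao.del(User("alice"))
}
```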

... [SQL Execution] create table if not exists bklg.Users (name string primary key); (87 ms)
... [SQL Execution] insert into bklg.Users (name) values (''); (5 ms)
... [SQL Execution] select u.name as n_on_u from bklg.Users u where name = ''; (2 ms)
... [SQL Execution] delete from bklg.Users where name = ''; (2 ms)
... [SQL Execution] drop table bklg.Users; (34 ms)

The SQL statements are very comprehensible and the DSL adds column aliases for mapping. Also: it works!

The End (?)

After some hiccups with OR-mappers, the database layer is up and running nicely - though the real issue will be maintaining it, since text-based SQL and scalikejdbc’s DSL counterpart don’t mix so well. However, this is something that will be solved in a later iteration (once the data model is a bit more mature), and for now it lets me exploit the full feature set of Crate. Ultimately I think it is necessary to use something fairly low-level until the features of Crate evolve - and scalikejdbc seems like a very good choice. It has nice documentation, fits into a more complex architecture, and does not rely much on Scala-specific features (implicits).

And next time there’s going to be … more (maybe)

image credit (header):