A generic asynchronous database access API

For quite some time it’s been bothering me that many excellent relational database access libraries for Scala use JDBC underneath. Even those with an asynchronous interface, like Slick, must maintain a pool of threads that block on I/O actions.[1]

I realized that the community could benefit from a generic, low‑level, asynchronous database access API. The great libraries out there, like the aforementioned Slick, could target this API the way they currently target JDBC. Because I thought it was an interesting thing to do, I went ahead and started designing the API along with implementing a PostgreSQL driver for it. I also decided to develop a Java API at the same time.

It’s been about 2 years since I started slowly progressing with this, and I feel it’s now time to announce it and ask for feedback. It’s called rdbc. The sources are available on GitHub. There’s also fairly complete documentation.[a]

Goals and non-goals of the project

After a couple of weeks of work, the following goals of the API emerged:

  1. Provide vendor‑neutral access to most commonly used database features.

    The API is meant to be vendor‑neutral in the sense that if clients stick to standard SQL features, no vendor‑specific code is needed and database backends can be switched without client code changes.

  2. Be asynchronous and reactive.

    All methods that can potentially perform I/O actions don’t block the executing thread, so the API fits well into a non‑blocking application design. rdbc allows building applications according to the Reactive Manifesto.

  3. Provide a foundation for higher‑level APIs.

    rdbc is a rather low‑level API enabling clients to use plain SQL queries and get results back. While it can be used directly it’s also meant to provide a foundation for higher‑level APIs like functional or object relational mapping libraries.

I also decided on one non‑goal which is:

  1. Will not offer full type‑safety.

    rdbc works on a SQL level, meaning that requests made to the database are strings. There is no additional layer that would ensure type‑safety when working with SQL. The API is also meant to be dynamic and allow type converters to be registered at runtime. This approach sacrifices some type‑safety but at the same time makes it possible to implement a wider range of higher‑level APIs on top of rdbc.
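To illustrate the trade‑off, here is a plain‑Scala sketch of what runtime‑registered type converters look like. This is a hypothetical registry, not rdbc’s actual API: the point is only that the target type is resolved at runtime, so a missing converter fails at runtime rather than at compile time.

```scala
import scala.reflect.ClassTag

// Hypothetical converter registry (illustration only, not rdbc's API).
// Converters are looked up by target class at runtime; an unregistered
// target type throws at runtime instead of failing compilation.
class ConverterRegistry {
  private var converters = Map.empty[Class[_], Any => Any]

  def register[T](f: Any => T)(implicit ct: ClassTag[T]): Unit =
    converters += (ct.runtimeClass -> f)

  def convert[T](raw: Any)(implicit ct: ClassTag[T]): T =
    converters(ct.runtimeClass)(raw).asInstanceOf[T]
}

val registry = new ConverterRegistry
registry.register[Int](raw => raw.toString.toInt)

val converted: Int = registry.convert[Int]("42") // 42
```

This dynamism is what makes it possible to build a wider range of higher‑level APIs on top, at the cost of moving some errors from compile time to run time.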

A basic example

Just to give you a general idea, below are simple code snippets that demonstrate basic usage of the API. The code selects the name column from the users table and returns the result as a collection of strings.


val db: io.rdbc.sapi.ConnectionFactory = /*...*/

val names: Future[Vector[String]] = db.withConnection { conn =>
  conn.statement("select name from users where age = :age")
      .bind("age" -> 30)
      .executeForSet() // Future[ResultSet]
      .map { resultSet =>
        resultSet.map(row => row.str("name")).toVector
      }
}


io.rdbc.japi.ConnectionFactory db = /*...*/;

CompletionStage<List<String>> names = db.withConnection(conn ->
    conn.statement("select name from users where age = :age")
        .arg("age", 30)
        .executeForSet() // CompletionStage<ResultSet>
        .thenApply(resultSet ->
            resultSet.getRows().stream()
                     .map(row -> row.getStr("name"))
                     .collect(Collectors.toList())
        )
);

API highlights

There are a couple of things I particularly like about the API. I described them below.

Reactive Streams interfaces

The API provides a set of methods returning Scala Futures (or CompletionStages in the Java variant), but these interfaces aren’t the only way to access data from the database. Rows returned by SQL queries can be streamed with back‑pressure from the database using the Reactive Streams Publisher interface, which enables easy interop with Akka Streams, the Play Framework and other technologies that support streaming.

Here’s a Play Framework action implementation returning a stream of JSON documents using HTTP chunked transfer encoding.


import akka.stream.scaladsl.{Sink, Source}
import org.reactivestreams.Publisher
import io.rdbc.sapi.SqlInterpolator._
import io.rdbc.sapi.{ConnectionFactory, Row}

val db: ConnectionFactory = /*...*/

def stream = Action.async { _ =>
  db.connection().map { conn =>
    val pub: Publisher[Row] = conn
      .statement(sql"SELECT i, t, v FROM rdbc_demo ORDER BY i, t, v")
      .stream()

    val source = Source.fromPublisher(pub).map { row =>
      Record(row.intOpt("i"), row.instantOpt("t"), row.strOpt("v"))
    }.alsoTo(Sink.onComplete(_ => conn.release()))

    Ok.chunked(source.map(record => Json.toJson(record).toString + "\n")).as(JSON)
  }
}


import akka.NotUsed;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import org.reactivestreams.Publisher;
import io.rdbc.japi.ConnectionFactory;
import io.rdbc.japi.Row;

private final ConnectionFactory db = /*...*/;

public CompletionStage<Result> stream() {
    return db.getConnection().thenApply(conn -> {
        Publisher<Row> pub = conn
                .statement("SELECT i, t, v FROM rdbc_demo ORDER BY i, t, v")
                .stream();

        Source<ByteString, NotUsed> recordSource = Source.fromPublisher(pub)
                .map(row -> Json.toJson(
                        new Record(row.getIntOpt("i"), row.getInstantOpt("t"), row.getStrOpt("v"))))
                .map(json -> ByteString.fromString(json.toString() + "\n"))
                .alsoTo(Sink.onComplete(ignore -> conn.release()));

        return Results.ok().chunked(recordSource).as(Http.MimeTypes.JSON);
    });
}

String interpolation

The API features a string interpolator for Scala that makes it easy to build SQL statements without sacrificing safety against SQL injection attacks. Statements can be created like this (note the sql prefix before the SQL statement):

import io.rdbc.sapi.SqlInterpolator._
import io.rdbc.sapi.Connection

def insertPerson(conn: Connection, person: Person): Future[Unit] = {
  conn.statement(sql"insert into people (name, age) values (${person.name}, ${person.age})")
      .execute()
}

In the above snippet, ${person.name} and ${person.age} will be replaced by prepared statement parameters, and their values will be passed to the database without string concatenation (just as with JDBC’s java.sql.PreparedStatement).
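To make that mechanism concrete, here is a minimal plain‑Scala sketch of what such an interpolator does under the hood. This is an illustration (the sqlSketch helper is hypothetical, not rdbc’s implementation): interpolated values are never spliced into the query string; they are collected as parameters while the string keeps placeholders.

```scala
// Hypothetical interpolator sketch: turns sqlSketch"... $x ..." into a
// query string with "?" placeholders plus a separate parameter list.
implicit class SqlSketchHelper(sc: StringContext) {
  def sqlSketch(args: Any*): (String, Seq[Any]) = {
    val query = sc.parts.mkString("?") // placeholders where values were
    (query, args)                      // values travel separately
  }
}

val name = "O'Brien" // would break naive string concatenation
val (query, params) = sqlSketch"insert into people (name) values ($name)"

// query:  insert into people (name) values (?)
// params: Seq("O'Brien")
```

Because the value travels as a parameter, a malicious string can never change the shape of the SQL statement.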

Unfortunately, that’s not possible in the Java version. Below is the equivalent Java code.

import io.rdbc.japi.Connection;

public CompletionStage<Void> insertPerson(Connection conn, Person person) {
    return conn.statement("insert into people (name, age) values (:name, :age)")
               .arg("name", person.getName())
               .arg("age", person.getAge())
               .execute();
}

No nulls

Another thing that I like is that the API doesn’t force you to use null. You can’t get a SQL NULL value from the database as a JVM null. If you expect some column to hold NULL values, you must explicitly call a method that returns Scala’s Option or Java’s Optional:


val str: Option[String] = row.strOpt("column")


Optional<String> str = row.strOpt("column");
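Downstream code then handles absence explicitly instead of guarding against null. A plain‑Scala sketch (using a literal None as a stand‑in for what row.strOpt would return for a NULL cell):

```scala
// Stand-in for row.strOpt("column") when the cell holds SQL NULL:
val str: Option[String] = None

// Absence is handled explicitly, with no null checks anywhere:
val display: String = str.getOrElse("<unknown>")
val upper: Option[String] = str.map(_.toUpperCase) // stays None
```

The compiler forces callers to decide what a missing value means, rather than letting a null leak out and surface later as a NullPointerException.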

Asynchronous and non‑blocking PostgreSQL driver

Introducing a new API without any working implementation wouldn’t make much sense, so I decided to implement a PostgreSQL driver for it. It’s a Netty‑based library that has been tested against PostgreSQL 9.6.x and 10.x, but it should in fact work with any version since 7.4.

Check out complete examples that use a real PostgreSQL database (including a Play Framework app) in the rdbc‑examples repository. They are documented in the repository’s README.

Call for feedback

Before releasing the first stable version I need some feedback from the community, especially on the API design itself. Any constructive feedback is welcome: please message me on Twitter or Gitter, drop me an e‑mail, or comment on this article.

Also, if you think that my effort makes sense please star my repos on GitHub, thanks!


  • a. Documentation for the Java version is not yet available.


  1. Slick contributors. Slick documentation, Database thread pool chapter
  2. Stephen Colebourne, Roger Riggs, Michael Nascimento Santos. JSR 310: Date and Time API, Section 2.5