SQL RDBMS Access with ScalikeJDBC
There is a library that makes working with SQL from Scala programs much more convenient, and I have found no mention of it on this hub. I would like to correct that injustice. The library is ScalikeJDBC.
The main competitor of ScalikeJDBC is Anorm, a library from Play that solves exactly the same task of convenient communication with an RDBMS using pure SQL, without ORM admixtures. However, Anorm is deeply entangled with Play, and using it in non-Play projects can be difficult. I did not wait for it to become difficult for me: having heard about ScalikeJDBC, I decided to try it almost immediately. I will share the results of that trial, in the form of a small demo application, in this article below.
Before moving on to the usage example, it is worth noting that the following DBMSs are supported and covered by the library's tests:
- PostgreSQL
- MySQL
- H2 Database Engine
- HSQLDB
The rest (Oracle, MS SQL Server, DB2, Informix, SQLite, and so on) should also work, since all communication with the DBMS goes through standard JDBC. They are not covered by the tests, however, which may cast a shadow of gloom over the corporate customer.
Application example
But let us leave the corporate customer alone with his gloomy thoughts and get on with what this article was written for: a short dive into the library.
I will walk through a simple application that uses ScalikeJDBC to access PostgreSQL. I will show how to configure it using Typesafe Config, create a table in the database, run CRUD queries against that table, and convert the results of read queries into Scala objects. To keep things brief and provide a quick start, I will deliberately skip many configuration options (including configuring without Typesafe Config) and usage scenarios. A full description of the features is available in the convenient and fairly concise documentation, as well as in the wiki on GitHub.
The application will use SBT to build and manage dependencies, so we create a build.sbt file in the root of an empty project with the following contents:
name := "scalike-demo"
version := "0.0"
scalaVersion := "2.11.6"
val scalikejdbcV = "2.2.5"
libraryDependencies ++= Seq(
  "org.postgresql"  %  "postgresql"         % "9.4-1201-jdbc41",
  "org.scalikejdbc" %% "scalikejdbc"        % scalikejdbcV,
  "org.scalikejdbc" %% "scalikejdbc-config" % scalikejdbcV
)
It declares the following dependencies:
- postgresql - the PostgreSQL JDBC driver
- scalikejdbc - the ScalikeJDBC library itself
- scalikejdbc-config - the Typesafe Config support module for configuring the connection to the DBMS
As the DBMS we will use a local PostgreSQL on the standard port (5432). It already has a pguser user with the password securepassword and full access to the demo_db database.
For this setup, create the src/main/resources/application.conf configuration file with the following contents:
db {
  demo_db {
    driver = org.postgresql.Driver
    url = "jdbc:postgresql://localhost:5432/demo_db"
    user = pguser
    password = securepassword
    poolInitialSize = 10
    poolMaxSize = 20
    connectionTimeoutMillis = 1000
    poolValidationQuery = "select 1 as one"
    poolFactoryName = "commons-dbcp"
  }
}
We could restrict ourselves to the first four parameters; in that case, the default connection pool settings would apply.
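For reference, a minimal variant of the same file that relies entirely on the default pool settings would look like this:

db {
  demo_db {
    driver = org.postgresql.Driver
    url = "jdbc:postgresql://localhost:5432/demo_db"
    user = pguser
    password = securepassword
  }
}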
Next, create a demo package in the src/main/scala folder, where we will place all the Scala code.
DemoApp.scala
Let's start with the main startup object:
package demo
import scalikejdbc.config.DBs
object DemoApp extends App {
  DBs.setup('demo_db)
}
The only line inside the object is an instruction to read the access settings for the demo_db database from the configuration files. The DBs object looks for all the relevant configuration keys (driver, url, user, password, ...) under the db.demo_db node in every configuration file read by Typesafe Config. By convention, Typesafe Config automatically reads the application.conf found on the application classpath.
The result is a configured ConnectionPool for the database.
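There is also a symmetric teardown step. When the application shuts down, the pool can be released explicitly; a minimal sketch (the object name and the explicit close call are my addition, not part of the demo):

package demo

import scalikejdbc.config.DBs

object PoolLifecycleDemo extends App {
  DBs.setup('demo_db)   // create and configure the pool for db.demo_db

  // ... work with the database ...

  DBs.close('demo_db)   // release the demo_db pool; DBs.closeAll() would release every configured pool
}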
DbConnected.scala
Next, we create a trait that encapsulates obtaining a database connection from the pool:
package demo

import java.sql.Connection

import scalikejdbc._

trait DbConnected {
  def connectionFromPool: Connection = ConnectionPool.borrow('demo_db) // (1)

  def dbFromPool: DB = DB(connectionFromPool)                          // (2)

  def insideLocalTx[A](sqlRequest: DBSession => A): A = {              // (3)
    using(dbFromPool) { db =>
      db localTx { session =>
        sqlRequest(session)
      }
    }
  }

  def insideReadOnly[A](sqlRequest: DBSession => A): A = {             // (4)
    using(dbFromPool) { db =>
      db readOnly { session =>
        sqlRequest(session)
      }
    }
  }
}
In (1) we obtain a connection (java.sql.Connection) from the pool created and configured in the previous step.
In (2) we wrap the obtained connection in ScalikeJDBC's convenient database access object.
In (3) and (4) we create wrappers that are convenient for executing SQL queries: (3) for modifying queries, (4) for read-only queries. We could have done without them, but then we would have to write this everywhere:
def delete(userId: Long) = {
  using(dbFromPool) { db =>
    db localTx { implicit session =>
      sql"DELETE FROM t_users WHERE id = ${userId}".execute().apply()
    }
  }
}
instead of:
def delete(userId: Long) = {
  insideLocalTx { implicit session =>
    sql"DELETE FROM t_users WHERE id = ${userId}".execute().apply()
  }
}
And nobody has repealed DRY yet.
Let’s take a closer look at what happens in items (3) and (4):
- using(dbFromPool) wraps the opening and closing of the database connection around a single request. Without it, you would have to open the connection yourself (val db = ThreadLocalDB.create(connectionFromPool)) and remember to close it (db.close()).
- db.localTx creates a blocking transaction within which the queries are executed. If an exception is thrown inside the block, the transaction is rolled back. Details are in the documentation.
- db.readOnly executes the queries in read-only mode. Details are in the documentation.
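To make the transactional guarantee concrete, here is a small illustration of my own (the method and its queries are hypothetical, written inside a class that mixes in DbConnected): both statements run in one transaction, so a failure of the second rolls back the first as well.

def renameAndClearEmail(userId: Long, newName: String): Unit = {
  insideLocalTx { implicit session =>
    // both updates belong to the same transaction
    sql"UPDATE t_users SET name = ${newName} WHERE id = ${userId}".execute().apply()
    // if this statement throws, the UPDATE above is rolled back as well
    sql"UPDATE t_users SET email = NULL WHERE id = ${userId}".execute().apply()
  }
}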
We can now use this trait in our DAO classes, of which there will be exactly one in our training application.
User.scala
Before creating the DAO class itself, let's create the domain object it will work with. It is a simple case class describing a system user with a few self-explanatory fields:
package demo
case class User(id: Option[Long] = None,
                name: String,
                email: Option[String] = None,
                age: Option[Int] = None)
Only the name field is required. id == None means that the object has not yet been saved to the database.
UserDao.scala
Now everything is ready to create our DAO object.
package demo
import scalikejdbc._
class UserDao extends DbConnected {

  def createTable(): Unit = {
    insideLocalTx { implicit session =>
      sql"""CREATE TABLE t_users (
              id BIGSERIAL NOT NULL PRIMARY KEY,
              name VARCHAR(255) NOT NULL,
              email VARCHAR(255),
              age INT)""".execute().apply()
    }
  }

  def create(userToSave: User): Long = {
    insideLocalTx { implicit session =>
      val userId: Long =
        sql"""INSERT INTO t_users (name, email, age)
              VALUES (${userToSave.name}, ${userToSave.email}, ${userToSave.age})"""
          .updateAndReturnGeneratedKey().apply()
      userId
    }
  }

  def read(userId: Long): Option[User] = {
    insideReadOnly { implicit session =>
      sql"SELECT * FROM t_users WHERE id = ${userId}".map(rs =>
        User(rs.longOpt("id"),
             rs.string("name"),
             rs.stringOpt("email"),
             rs.intOpt("age")))
        .single.apply()
    }
  }

  def readAll(): List[User] = {
    insideReadOnly { implicit session =>
      sql"SELECT * FROM t_users".map(rs =>
        User(rs.longOpt("id"),
             rs.string("name"),
             rs.stringOpt("email"),
             rs.intOpt("age")))
        .list.apply()
    }
  }

  def update(userToUpdate: User): Unit = {
    insideLocalTx { implicit session =>
      sql"""UPDATE t_users SET
              name = ${userToUpdate.name},
              email = ${userToUpdate.email},
              age = ${userToUpdate.age}
            WHERE id = ${userToUpdate.id}""".execute().apply()
    }
  }

  def delete(userId: Long): Unit = {
    insideLocalTx { implicit session =>
      sql"DELETE FROM t_users WHERE id = ${userId}".execute().apply()
    }
  }
}
Here it is easy to guess what each method does.
An SQL object is created using one of two interpolator notations:
sql"..." - for single-line queries
sql"""...""" - for multi-line queries
The following methods are then called on this object:
- execute - executes the query without returning a result
- map - converts the rows of the WrappedResultSet into the representation we need, in our case User objects. After the conversion, you must specify the expected number of returned rows:
  - single - returns a single row of the result as an Option
  - list - returns the entire result as a collection
- updateAndReturnGeneratedKey - performs an insert and returns the generated identifier of the created row
The chain ends with the apply() operation, which executes the constructed query through the implicit session declared in scope.
It should also be noted that all parameter substitutions like ${userId} become bound parameters of a PreparedStatement, so there is no reason to fear SQL injection.
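A side note on the mapping code: read and readAll repeat the same WrappedResultSet-to-User conversion. Nothing prevents extracting it into a helper; a small sketch of such a refactoring inside UserDao (my suggestion, not part of the original demo):

// One place that knows how a t_users row maps onto the User case class.
private def userFromRs(rs: WrappedResultSet): User =
  User(rs.longOpt("id"),
       rs.string("name"),
       rs.stringOpt("email"),
       rs.intOpt("age"))

def read(userId: Long): Option[User] = {
  insideReadOnly { implicit session =>
    sql"SELECT * FROM t_users WHERE id = ${userId}".map(userFromRs).single.apply()
  }
}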
Finita
So, our DAO object is ready. It is, of course, odd to see a table-creation method in it... It was added purely as an example; in a training application we can afford that. All that remains is to put this DAO object to use. To do so, we modify the DemoApp object created at the beginning. For example, it could take the following form:
package demo
import scalikejdbc.config.DBs
object DemoApp extends App {
  DBs.setup('demo_db)

  val userDao = new UserDao

  userDao.createTable()

  val userId = userDao.create(User(name = "Vasya", age = Some(42)))
  val user = userDao.read(userId).get
  val fullUser = user.copy(email = Some("vasya@domain.org"), age = None)
  userDao.update(fullUser)

  val userToDeleteId = userDao.create(User(name = "Petr"))
  userDao.delete(userToDeleteId)

  userDao.readAll().foreach(println)
}
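If my reasoning is right (assuming a freshly created demo_db, where the BIGSERIAL counter starts at 1), running this with sbt run should leave exactly one row in the table, and the final readAll().foreach(println) should print something like:

User(Some(1),Vasya,Some(vasya@domain.org),None)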
Conclusion
In this brief overview, we looked at the capabilities of the ScalikeJDBC library and felt the lightness and power with which it lets you build objects for accessing relational data. I am glad that, in the era of ORM dominance, there is a tool that solves its tasks well and continues to develop actively.
Thank you for your attention. May Scala be with you!