Things that suck


It would be unfair to tout all the benefits of DBLX if we did not also point out the rough spots.
So the following details are the software equivalent of DBLX dropping its pants and showing you its ugly private parts.

Incomplete SQL support


This topic is the first one on the list for a reason. DBLX is a database written by anywhere from 1 to 3 people at a time. We do this in our 'spare time'. We have come a long way since our beginnings in 2011 and probably cover more than 85% of all SQL statements that DBLX will ultimately have. But we are not 100% yet.
DBLX is still a little light in a few areas of SQL support. All of these are coming soon, and many will be in our next release.

Memory


Database tables are held in memory for the best performance. The data is written to a persistent storage format, but the database work that results from SELECT, INSERT, UPDATE, and DELETE operations is all done in memory. With DBLX it is possible to create hundreds of tables with tens of thousands of rows in each, and DBLX will try to hold it all in memory. There are limits to how far this scales, and if needed DBLX can be configured to run in large-memory and/or 64-bit environments.
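Since DBLX is written in Java, a quick sanity check is the generic JVM snippet below (nothing DBLX-specific): it prints the heap ceiling the JVM was started with, which you can raise with the standard -Xmx flag when launching the server.

    // Generic JVM check, nothing DBLX-specific: print the heap ceiling
    // this JVM was started with. If your tables will not fit, raise it
    // with the standard -Xmx flag (e.g. -Xmx4g) when starting the server.
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap for this JVM: %d MB%n", maxBytes / (1024 * 1024));
        }
    }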
If you have memory issues with DBLX, please send us an email and we will try to help you any way we can.

No Relational Indexes


We don't have indexes in DBLX in the same way that other relational databases do. From the outside, the assumption is that a database without an indexing feature cannot perform well. DBLX uses an XML format for its persisted table data, and through the technology used to parse that format we get a hierarchical index that speeds up searches. We are constantly revisiting this decision and improving performance in the process, but we are not really done with indexes just yet.
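To make the idea concrete, here is a toy two-level index. This is an illustration of the concept only, not our actual code; the table-then-value layout is assumed for the example.

    import java.util.*;

    // Toy two-level index: table name -> column value -> row ids.
    // A lookup narrows to one subtree instead of scanning every row,
    // which is roughly the effect the hierarchical parsing gives.
    public class HierarchicalIndex {
        private final Map<String, Map<String, List<Integer>>> index = new HashMap<>();

        public void put(String table, String value, int rowId) {
            index.computeIfAbsent(table, t -> new HashMap<>())
                 .computeIfAbsent(value, v -> new ArrayList<>())
                 .add(rowId);
        }

        public List<Integer> find(String table, String value) {
            return index.getOrDefault(table, Map.of())
                        .getOrDefault(value, List.of());
        }
    }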

Brute-force SQL parser


The DBLX SQL parser is quite solid and can handle a very wide swath of SQL variations, but the string parsing routines are not a work of art from a software perspective. Our SQL parser is neither brittle nor difficult to maintain, and it has proven simple to extend.
The problem with our parser is that we tried to make it as fast as possible. To do that we kept finding every little efficiency we could wring from it, and by the time we were done the code was fast but had radically changed form from our clean-room beginnings. If this were a revenue-generating product sold to users, we would not have wanted a parser that was fast at the expense of looking very odd.
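For flavor, here is a deliberately brute-force parse of a simple SELECT. It is a toy and not our parser, but it shows the straight string-scanning style we are describing.

    // Toy brute-force SELECT parse: no grammar, no tokenizer, just
    // indexOf() and substring() against known keywords.
    public class CrudeSelectParser {
        public static void main(String[] args) {
            String sql = "SELECT name, age FROM users WHERE age > 30";
            String upper = sql.toUpperCase();
            int from = upper.indexOf(" FROM ");
            int where = upper.indexOf(" WHERE ");
            String columns = sql.substring("SELECT ".length(), from).trim();
            String table = (where < 0 ? sql.substring(from + 6)
                                      : sql.substring(from + 6, where)).trim();
            String criteria = where < 0 ? "" : sql.substring(where + 7).trim();
            System.out.println("columns=" + columns + " table=" + table
                               + " where=" + criteria);
        }
    }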

All the expensive things


There are a few things we were required to implement that turned out to be grossly low-performing. The worst of the bunch was string encryption. DBLX encrypts communications with its clients and encrypts its data. The cost in time to encrypt a string is very high, especially for small strings. The best improvement we found was simply to minimize when and where we perform encryption. We can't do much else with the encryption code, so we will hold off on changes until the next major release.
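As a sketch of the "minimize when and where" idea, one generic trick is to memoize ciphertext for repeated strings. This is an illustration only, not what our code does, and it only applies when the cipher output is stable per plaintext.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Sketch: pay the encryption cost once per distinct string.
    // encryptFn stands in for whatever cipher the server really uses;
    // caching like this assumes deterministic output per plaintext.
    public class EncryptOnce {
        private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
        private final Function<String, byte[]> encryptFn;

        public EncryptOnce(Function<String, byte[]> encryptFn) {
            this.encryptFn = encryptFn;
        }

        public byte[] encrypt(String plaintext) {
            return cache.computeIfAbsent(plaintext, encryptFn);
        }
    }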
Implementing the datetime datatype turned out to be much more complicated than we anticipated. Everything we started with was very slow compared to all the other data types. There were a few places where we found more efficient methods of recognizing datetime values in Strings, but we were still left constructing a Date object from a String. In the profiler, going from a String to a Date showed up in the top 5 biggest code pigs.
We probably can't do much with the String-to-Date performance, so we will also hold off on any changes until a later release.
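The recognition trick looks something like the sketch below: screen candidates with a cheap character scan so the expensive parse only runs on plausible dates. The yyyy-MM-dd format here is an assumption for illustration, not our actual format.

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Cheap screen, then expensive parse. Non-dates are rejected by a
    // character scan before we ever pay for SimpleDateFormat.parse().
    // (SimpleDateFormat is not thread-safe; fine for a one-off demo.)
    public class DateFastPath {
        private static final SimpleDateFormat FMT = new SimpleDateFormat("yyyy-MM-dd");

        static boolean looksLikeDate(String s) {
            if (s.length() != 10 || s.charAt(4) != '-' || s.charAt(7) != '-') return false;
            for (int i = 0; i < 10; i++) {
                if (i == 4 || i == 7) continue;
                if (!Character.isDigit(s.charAt(i))) return false;
            }
            return true;
        }

        static Date parseOrNull(String s) {
            if (!looksLikeDate(s)) return null;   // cheap rejection
            try { return FMT.parse(s); }          // expensive path, taken rarely
            catch (ParseException e) { return null; }
        }

        public static void main(String[] args) {
            System.out.println(parseOrNull("2011-06-15")); // parses
            System.out.println(parseOrNull("hello"));      // rejected cheaply
        }
    }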
Joins were also a bit of a performance issue, and actually turned out to be 180 degrees from what we had planned. We figured the JOIN code would be hard to implement and really big, but it was actually very small and simple to create. The problem with JOINs is that the data has to be queried first, JOINed second, and only then filtered on the WHERE criteria. After you join multiple data sets you have essentially created one big table with all the columns from all the tables being JOINed. To match the WHERE criteria you have to walk down many columns for each row in the JOINed set, and this is significantly slower than filtering rows from one table. We did find some efficiencies with JOINs, and while we think there is more performance to be had, JOINs in DBLX are as fast as in any other relational database.
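To show why that ordering hurts, here is a toy nested-loop join with the WHERE applied afterward. The tables and column positions are made up for illustration.

    import java.util.ArrayList;
    import java.util.List;

    // Toy nested-loop join: every surviving row is a wide concatenation
    // of both tables' columns, and the WHERE criteria must be checked
    // against that wide row after the join is already built.
    public class ToyJoin {
        public static void main(String[] args) {
            List<String[]> users  = List.of(new String[]{"1", "alice"},
                                            new String[]{"2", "bob"});
            List<String[]> orders = List.of(new String[]{"100", "1"},
                                            new String[]{"101", "2"});

            List<String[]> joined = new ArrayList<>();
            for (String[] u : users)                     // JOIN first...
                for (String[] o : orders)
                    if (u[0].equals(o[1]))               // users.id = orders.user_id
                        joined.add(new String[]{u[0], u[1], o[0], o[1]});

            for (String[] row : joined)                  // ...WHERE filter second
                if (row[1].equals("alice"))              // walk the wide row
                    System.out.println(String.join(", ", row));
        }
    }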

Contrarian Design Goals


When we started on the project we had a set of Prime Directives. The Directives stated that we had to make DBLX small and fast.
These days, small and fast applications are a rarity, and in Java specifically they take experience to create.
When we brought other developers onto the project we had a communication problem: they could hear the goals but did not know what to do differently to achieve them. This issue was worst for the Design Bigots, who were fixated on the idea that some design was the right one and that design alone would ensure a small or fast application.
Over time it became clear that we needed a way to articulate the Prime Directives to other developers. As we tried to write down our approach, it started to look more like a set of coding guidelines. But the guidelines worked, and they could be read and understood by other programmers, including the Design Bigots. It's safe to say that many did not like what they read, but at least they got the concepts.
We took the document with the coding guidelines and tried to put a title on it. Since many of the guidelines were about reducing code, it became clear that our approach was to reduce the amount of code to the bare minimum required to solve the problem. From that we gave the guidelines the official name: the Reductive Development Model.
If you want to know how well it works, just remember that the DBLX server is just under 500 KB.