The past couple of years have been all about the switch to NoSQL databases such as Mongo, Riak, Volt, etc., and I see tremendous benefit in using these databases for the right job. However, relational or not, most databases suffer from the same problem: bad database design.
When you see an application (regardless of whether it’s mobile or web based) that takes a long time to do something, the bottleneck is almost always the data access layer and/or the database. When you look under the covers, the design looks like this:
In this situation, you get two types of answers depending on the scale and mindset of the organization you are in:
- If you are in the enterprise: “we need to buy new hardware and upgrade our <insert RDBMS company name here> licenses” or “we should install a cluster”
- If you are in a smaller organization or start-up: “we need to replace the technology, because the new database is X times faster thanks to eventual consistency, relaxed ACID guarantees, writes to the cloud, or some other magical advantage.”
The real issue is usually not the technology itself; most of the time it’s how the design team applied the technology. Before jumping the gun and diving into the gnarly details of a technology migration, organizations need to look at how they can avoid the situation in the future.
So, what can you do to prevent “bad data model design”? Asking a single question has worked for me time and again. Next time you are on a project, just try asking your “data team” the following question:
How are you testing your data model design?
Invariably, this will lead to a discussion of the scenarios the database is intended to support, and with some effort you can bring the data team around to thinking about “test driven data modeling”. Hopefully, since the rest of your team is already following TDD best practices, you will end up with one coherent design that solves the actual problem rather than satisfying the engineers’ hunger for complex modeling.
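To make the idea concrete, here is a minimal sketch of what “test driven data modeling” can look like in practice: each access scenario the schema must support is written down as a test before the application code exists. The schema, table names, and scenarios below (order history per customer, index coverage) are hypothetical examples of mine, not from any particular project, and use SQLite only because it runs anywhere.

```python
# A sketch of test-driven data modeling: encode each access scenario
# the schema must support as a test against a throwaway database.
# Schema and scenarios are illustrative assumptions, not a prescription.
import sqlite3
import unittest

SCHEMA = """
CREATE TABLE customer (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE "order" (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    total_cents INTEGER NOT NULL
);
CREATE INDEX idx_order_customer ON "order"(customer_id);
"""

class DataModelScenarios(unittest.TestCase):
    def setUp(self):
        # Every test runs against a fresh in-memory database.
        self.db = sqlite3.connect(":memory:")
        self.db.executescript(SCHEMA)

    def test_order_history_per_customer(self):
        # Scenario: the app must list a customer's orders, newest first.
        self.db.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada')")
        self.db.executemany(
            'INSERT INTO "order" (customer_id, total_cents) VALUES (?, ?)',
            [(1, 500), (1, 1200)],
        )
        rows = self.db.execute(
            'SELECT total_cents FROM "order" '
            "WHERE customer_id = ? ORDER BY id DESC",
            (1,),
        ).fetchall()
        self.assertEqual(rows, [(1200,), (500,)])

    def test_customer_lookup_uses_index(self):
        # Scenario: the per-customer query must not scan the whole table.
        plan = self.db.execute(
            'EXPLAIN QUERY PLAN SELECT * FROM "order" WHERE customer_id = 1'
        ).fetchall()
        self.assertTrue(any("idx_order_customer" in str(row) for row in plan))

if __name__ == "__main__":
    unittest.main()
```

The point is not the tooling; it’s that the scenarios (and even performance expectations, like the index check above) become executable checks that the data model must satisfy before anyone argues about technology.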