
Thursday, March 29, 2018

Is NoSQL dead?

TL;DR -- "Reports of NoSQL's death are greatly exaggerated!..."

Nearly every article introducing NoSQL starts by explaining that the term is a misnomer, as it really stands for "Not Only SQL", etc...  And back in 2014, some analysts predicted that "By 2017, the 'NoSQL' label will cease to distinguish DBMSs, which will reduce its value and result in it falling out of use."  This was pleasing news for traditional DBMS vendors, and for "multi-model" vendors too.

For sure, we've seen some convergence.  RDBMS vendors all allow storage of JSON documents, and MongoDB has recently announced support for multi-document ACID transactions.

But full convergence and the disappearance of NoSQL would not be such a good thing for users.  Incumbents might like it if the buzz about NoSQL levels off, but it is in the interest of NoSQL vendors to maintain a striking differentiator while demonstrating their maturity as enterprise solutions.  The term "NoSQL" carries tremendous marketing power, and vendors would be foolish to stop leveraging it.

Beyond that, the situation resembles the debate of best-of-breed versus integrated platforms, ranging from hi-fidelity sound systems to ERPs.  There will always be fervent proponents of each philosophical approach.  The only question is: do you want the right tool for the job?  Among companies that have adopted NoSQL, few today use just a single database technology.  They may use one platform for operational big data, another one for search, yet another for caching, and one more to power their recommendation engine.

Enterprises are increasingly embracing a variety of best-of-breed NoSQL solutions to solve their specific challenges.  They want proper data governance for their unstructured and semi-structured data, particularly in the context of GDPR and privacy concerns.  And they need a single tool, with a powerful and user-friendly interface, to perform data modeling across the top NoSQL databases.  Hackolade provides just that:
- document-oriented: MongoDB, Couchbase, Cosmos DB, Elasticsearch, Firebase, Firestore
- key-value: DynamoDB, with Redis coming at a later date
- column-oriented: HBase, Cassandra
- graphs: we're actively developing a new version to support property graph databases, starting with Neo4j, and RDF triples
- RDBMS with JSON: we also plan support for JSON modeling in Oracle, MySQL, MS SQL Server, and PostgreSQL
- JSON and APIs: there's high demand for us to apply our data modeling to GraphQL, Swagger 2, OpenAPI 3, and LoopBack.

NoSQL is dead, long live NoSQL!

Current Hackolade DB targets


Tuesday, January 30, 2018

Schema validation for a schemaless database: is it a contradiction?


MongoDB recently introduced, with its version 3.6, a validation capability using JSON Schema syntax.  As we keep hearing that one of the great benefits of NoSQL is the absence of schema, isn’t this new feature an admission of the limitations of NoSQL databases?  The answer is a resounding NO: schema validation actually brings the best of both worlds to NoSQL databases!

Previously, with version 3.2, MongoDB had introduced a validation capability using its query expression syntax.  This was in response to the request of enterprises wishing to leverage the benefits of NoSQL without losing control of their data.  JSON Schema is the schema definition standard for JSON files, sort of the equivalent of XSD for XML files.  So, it was only natural that MongoDB would adopt the JSON Schema standard.  There are multiple reasons to leverage this capability:

1)     Enforcing schema only when it matters: with JSON Schema, you can declare the fields where you want enforcement to take place, and let other fields be added with no enforcement at all by using the 'additionalProperties' keyword.  Some fields are more important than others in a document.  In particular, in the context of privacy laws and GDPR, you may want to track some aspects of your schema and ensure consistency.  You may also want to control data quality with field constraints such as string length or regular expression, numeric upper and lower limits, etc. (see the sketch after this list)

2)     JSON polymorphism: having a schema declared and enforced does not at all limit your ability to have multi-type fields or flexible polymorphic structures.  It only makes sure that they do not occur as a result of development mistakes.  JSON Schema, with its oneOf/anyOf/allOf/not choices, lets you declare in your validation rules exactly what is allowed and what is not.

3)     Degree of enforcement: MongoDB lets you decide, for each collection, the validation level (off, strict, or moderate) and the validation action to be returned by the database through the driver (warn or error).

In effect, the $jsonSchema validator becomes the equivalent of a DDL (data definition language) for NoSQL databases, letting you apply just the right level of control to your database.
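Here is a minimal sketch of what this can look like from an application driver (pymongo in this example); the collection and field names ("reviews", "user_id", "stars", etc.) are purely illustrative, not Hackolade output:

```python
# Sketch: create a collection with a $jsonSchema validator, then tune enforcement.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["demo"]

review_schema = {
    "bsonType": "object",
    "required": ["user_id", "stars"],        # enforce only the fields that matter
    "properties": {
        "user_id": {"bsonType": "string", "pattern": "^u[0-9]+$"},
        "stars":   {"bsonType": "number", "minimum": 1, "maximum": 5},
        "text":    {"bsonType": "string", "maxLength": 5000},
    },
    "additionalProperties": True,            # any other field may be added freely
}

db.create_collection("reviews", validator={"$jsonSchema": review_schema})

# Per-collection validation level and action, adjustable at any time with collMod
db.command("collMod", "reviews",
           validator={"$jsonSchema": review_schema},
           validationLevel="moderate",
           validationAction="warn")
```

With validationLevel "moderate", existing documents that do not match the schema are left alone, and validationAction "warn" logs violations instead of rejecting the writes.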


Hackolade dynamically generates the MongoDB $jsonSchema validator from the model
Since Hackolade was built from the ground up on JSON Schema, it has been quite easy to maintain MongoDB certification as a result of this v3.6 enhancement.  No JSON Schema knowledge is required!  You build your collection model with a few mouse clicks, and Hackolade dynamically generates the JSON Schema script for creation or update of the collection validator.

Wednesday, April 19, 2017

The Tao of NoSQL Data Modeling

The idea for Hackolade came from my own personal need for a data modeling tool for NoSQL databases.  I searched the web, and couldn’t find one that would satisfy my needs.   I tried really hard to use existing tools!  After all, all I wanted was to give my credit card number and download the right tool to do my job.  The last thing on my mind was to embark on a new entrepreneurial adventure...


There is a short explanation for why I was not satisfied with the existing tools, and there is also a longer answer below.  The short answer is simple and fits (almost) in this one picture:

Reverse-engineering of Yelp Challenge dataset using traditional ERD tool

Periodically, Yelp awards prize money for interesting insights drawn from the analysis of its sample dataset.  In the past, this has led to hundreds of academic papers.  As the data is provided in JSON format, any NoSQL document database is a good candidate to store it, and several blogs explain how to use MongoDB for the analysis.  Using a data modeling tool to discover the data structure should be a great first step...
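As an aside, getting the dataset into MongoDB in the first place can be as simple as this rough sketch (the file, database, and collection names are assumptions; the Yelp files are line-delimited JSON, one document per line):

```python
# Sketch: load one of the Yelp dataset files into a MongoDB collection.
import json
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["yelp"]["business"]

with open("yelp_academic_dataset_business.json", encoding="utf-8") as f:
    batch = []
    for line in f:
        batch.append(json.loads(line))
        if len(batch) == 1000:          # insert in batches to limit memory use
            collection.insert_many(batch)
            batch = []
    if batch:
        collection.insert_many(batch)
```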

The only problem is that the Yelp dataset consists of just five data collections in MongoDB, yet the traditional ER tools finish their reverse-engineering process by showing these stats:

If there are just five collections in the database, you would expect only five entities in the Entity Relationship diagram, one for each of the collections in MongoDB, right?  Something more like this:
Reverse-engineering of Yelp Challenge dataset using Hackolade
Besides the more orderly aspect, this second diagram is also a lot easier to understand.  It is a closer representation of the physical storage, displaying nested JSON sub-objects as indentations rather than as separate boxes (entities) in the ERD -- in a manner similar to what you would find in a JSON document. 

And if you're developing or maintaining your own model, it is a lot easier to deal with the entire JSON structure in just one view, including all nested objects (arrays and sub-documents), than if you need to open a new entity for each nested object (like in the following picture representing the structure of just one of the Yelp documents...)
Yelp Business collection represented by a traditional ER tool

No wonder some developers of NoSQL applications don't want to hear about data modeling, when the diagram that is supposed to help understand and structure things is actually more confusing, and doesn't look anywhere close to the physical documents being committed to the database!  A more natural view would be this one:

Yelp Businesses collection represented by Hackolade
To manage object metadata, Hackolade provides a second view -- a hierarchical tree view -- similar to the familiar XSD tree:
Hierarchical tree view in Hackolade

One of the great benefits of this tree view is the handling of the polymorphic nature of JSON, letting the user define choices between different structures.

The reason for the difficulty with traditional ER tools in representing JSON nested structures is actually simple and logical: they were originally designed for relational databases, and their own persistence data model (how they store objects and metadata) is itself relational.

As a user, if you take a traditional ER diagramming tool designed for the data modeling of relational databases and apply it to a NoSQL database (MongoDB in this case), you are constrained by the original purpose and underlying data model of the tool itself.  And while it is quite creative of the vendor to make its tool "compatible" with MongoDB, it is clearly an afterthought, and it ends up not being very useful.

Just like NoSQL databases are built differently than relational databases, data modeling tools for NoSQL databases need to be engineered from the ground up to leverage the power and flexibility of JSON, with its ability to support nested semi-structured polymorphic data.  And to do that, the modeling tool cannot store its own data in flat relational tables!

Hackolade stores data model metadata in JSON (actually in JSON Schema, the JSON equivalent of XSD for XML), making it easy to represent JSON structures in a hierarchical manner that is close to the physical storage of the data.  And the user interface was built according to the specific nature and power of JSON.  This is why Hackolade is the pioneer for the data modeling of NoSQL and multi-model databases!

Longer answer

The challenges in modeling JSON with tools made for flat database structures are as follows:
  • similarity between JSON and its GUI representation
    • structure
    • sequence
    • indentation
  • clarity of complex models
  • meaning of relationship lines
  • representation of polymorphism

Structure

Contrary to conceptual modeling, JSON is a representation of the physical storage as implemented, or intended to be implemented, in a NoSQL database (or multi-model DBMS).  Entity Relationship modeling theory has worked wonders for the normalization of relational databases, with its ability to represent conceptual, logical, and physical models in diagrams.  But ER theory has to be stretched for the purpose of NoSQL because of the power and flexibility provided by embedding, denormalization, and polymorphism.

If the ERD is going to represent conceptual entities, then each embedded object in a JSON document could (perhaps sometimes) be represented by one box in the ERD.  However, we’re dealing here with physical storage, and in that case it is preferable to have:


1 JSON document = 1 entity = 1 box in the ERD

That way, the contextual unity of the document can be preserved.
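As a hypothetical illustration (all field names invented), the nested sub-document and array below stay inside the single document, and are therefore modeled as one entity rather than being split into three separate boxes:

```python
# A hypothetical business document: the "address" sub-document and the
# "reviews" array are embedded, so the whole thing is one entity/one box.
business = {
    "_id": "b1",
    "name": "Café Example",
    "address": {"street": "1 Main St", "city": "Brussels", "country": "BE"},
    "categories": ["coffee", "breakfast"],
    "reviews": [
        {"user_id": "u42", "stars": 5, "text": "Great espresso"},
        {"user_id": "u7",  "stars": 4, "text": "Cozy place"},
    ],
}
```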


Sequence

Preserving in the ERD the field sequence of the physical document helps legibility and understanding.


As a consequence of splitting embedded objects from the main document, the ERD drawn with traditional tools makes things harder for the observer by not displaying the same sequence of fields in the diagram as in the physical JSON.



On the other hand, Hackolade's views (ERD and the hierarchical tree) both respect the physical sequence of the document:

Indentation

Indentation of embedded objects in JSON (arrays and sub-documents) helps legibility.  As another consequence of splitting embedded objects from the main document, the ERD drawn with traditional tools does not preserve the indentation of JSON that would make it easy to read.

Clarity of complex models

Take a look at an example of the structure of a real document from a real customer (with some field names obfuscated on purpose...)
Complex JSON document

The ER rendering of such a document by a traditional ER tool would result in so many boxes that it becomes nearly impossible to work with.  And that’s with a single document.  Imagine what an ERD would look like for an application comprised of dozens of such collections.

Meaning of relationship lines

As yet another consequence of splitting embedded objects from the main document, the ERD drawn with traditional tools displays relationship lines of two different natures:
  • Relationships resulting from the embedding of objects
  • Traditional foreign key relationships [even though we are dealing with so-called ‘non-relational’ DBs, there are often implicit relationships in NoSQL data]
This makes for a confusing picture, as true foreign key relationships are hard to distinguish from embedding relationships (even when dashed and solid lines are used to tell them apart).

All this does not leave much room for a useful third type of relationship: those arising from denormalization (i.e., data redundancy, which is useful in NoSQL to improve the read performance of the database).
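A hypothetical document (all collection and field names invented for this sketch) showing the three kinds of relationships side by side:

```python
# A made-up "review" document illustrating the three relationship types above.
review = {
    "_id": "r1",
    "business_id": "b1",              # implicit foreign-key relationship to a "business" document
    "business_name": "Café Example",  # denormalized copy, duplicated to speed up reads
    "user": {                         # embedded sub-document: no separate entity needed
        "user_id": "u42",
        "name": "Alice",
    },
    "stars": 5,
}
```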

Polymorphism

One of the great features of JSON, as applicable to NoSQL and Big Data, is the ability to deal with evolving and flexible schemas, both at the level of the general document structure and at the level of the type of a single field.  This is known as "schema combination", and can be represented in JSON Schema with the use of subschemas and the keywords anyOf, allOf, oneOf, and not.

Let’s take the example of a field that evolves from being just a string type to becoming a sub-document, or where both field types co-exist.  Traditional ER tools have a hard time dealing graphically with subschemas (let's be frank, they're simply unable to deal with them...), whereas with Hackolade:
Polymorphism in 2 Hackolade views
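For reference, such a choice can be expressed in JSON Schema roughly as follows (a sketch with a hypothetical "contact" field, not the field shown in the screenshots):

```python
# Sketch of a JSON Schema subschema using "oneOf" for a polymorphic field:
# the hypothetical "contact" field may be either a plain string (legacy format)
# or a structured sub-document (newer format).
contact_subschema = {
    "oneOf": [
        {"type": "string"},                      # e.g. "jane@example.com"
        {
            "type": "object",
            "required": ["email"],
            "properties": {
                "email": {"type": "string"},
                "phone": {"type": "string"},
            },
        },
    ]
}
```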


Conclusion

Besides the above demonstration, Hackolade has many other advantages.  For example, reverse-engineering is done through truly native access to the NoSQL database, not via a "native" 3rd-party connector (is that not a contradiction in terms?...)  Hackolade provides useful developer aids such as the ability to generate sample documents and forward-engineering scripts specific to each supported NoSQL database vendor.  And Hackolade supports other NoSQL vendors than just MongoDB: DynamoDB, Couchbase, Cosmos DB, Elasticsearch, Apache HBase, Cassandra, Google Firebase and Firestore, with many more coming up.

Data is a corporate asset, and insight into that data is even more strategic.  Sometimes overlooked as a best practice, data modeling is critical to understanding data, its interrelationships, and its rules.

Hackolade lets you harness the power and flexibility of dynamic schemas.  It provides a map for applications, and a way to engage project stakeholders in a conversation around a picture.  Proper data modeling collaboration between analysts, architects, designers, developers, and DBAs will increase data agility, help get to market faster, increase quality, lower costs, and lower risks.