WL#8132: JSON datatype and binary storage format

Status: Complete

The goal of this WL is to develop a binary format that allows the server to
efficiently store, retrieve and search JSON data.

F1: Provide an interface that can be used by the WL#7909 functions to
transform their Json_dom data structures into binary form.

F2: Provide an interface that can be used by the WL#7909 functions to
access the binary representation of a JSON document.

F3: Enhance CREATE TABLE and ALTER TABLE so that they can create JSON
columns. For example:

  CREATE TABLE T(json_col1 JSON);
  ALTER TABLE T ADD COLUMN json_col2 JSON;

F4: Extend Field to allow inserting into and selecting from JSON-typed
fields. For example:

  INSERT INTO T(json_col1) VALUES('[1]');
  SELECT * FROM T;
  -- Should return '[1]'

At a high level, we will store the contents of the JSON document in
three sections:

1) A table of pointers to all the keys and values, in the order in
   which the keys and values are stored. Each pointer contains
   information about where the data associated with the key or the
   value is located, as well as type information about the key or
   value pointed to.

2) All the keys. The keys are sorted, so that lookup can use binary
   search to locate the key quickly.

3) All the values, in the same order as their corresponding keys.

If the document is an array, it has only two sections: the table of
pointers (the value entries) and the values.

If the document is a scalar, it has a single section which contains
the scalar value.

In JSON objects, the keys are sorted on length, and keys with the same
length are sorted lexicographically (on the Unicode code points,
regardless of collation), in ascending order. (By sorting on length
first, it is possible to do binary search without even looking at the
actual contents of most of the keys.)
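This ordering rule can be sketched as a comparator. The helper below is a hypothetical illustration, not the server's actual code; note that comparing equal-length keys with memcmp on their UTF-8 bytes yields exactly the code-point order described above:

```cpp
#include <cstring>
#include <string>

// Order keys the way the binary format sorts them: shorter keys come
// first, and equal-length keys are compared byte by byte. For UTF-8
// data, bytewise comparison coincides with Unicode code point order.
static bool json_key_less(const std::string &a, const std::string &b)
{
  if (a.length() != b.length())
    return a.length() < b.length();
  return std::memcmp(a.data(), b.data(), a.length()) < 0;
}
```

With this comparator, the keys "id", "name" and "a" would be stored in the order "a", "id", "name".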

In JSON arrays, the values are stored in index order.

Having a table of pointers to the keys and values at the beginning of
the binary representation makes it possible to look up a member in
the middle of the document directly without scanning past all the
members that precede it. The entries in the table must have fixed
size, so that the position of a given entry can be calculated easily
(pos = offset + entry_num * size_of_entry).
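For the "small" variants defined in the grammar later in this worklog, the entry sizes work out to 4 bytes per key entry (2-byte offset plus 2-byte length) and 3 bytes per value entry (1-byte type plus 2-byte offset), so the formula above can be sketched like this (hypothetical helpers for illustration):

```cpp
#include <cstddef>

// Byte positions of the i'th key entry and value entry in a "small"
// JSON object. The header is element-count (2 bytes) + size (2 bytes);
// each key entry is 4 bytes and each value entry is 3 bytes, so entry
// positions can be computed without scanning earlier entries.
static size_t small_key_entry_pos(size_t i)
{
  return 2 + 2 + i * (2 + 2);
}

static size_t small_value_entry_pos(size_t element_count, size_t i)
{
  return 2 + 2 + element_count * (2 + 2) + i * (1 + 2);
}
```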

Because we want to allow in-place updates of JSON values in the
future, and we want to do that without changing the format
specification, our format will store both the offset and the length of
variable-length values (strings, blobs, ...). This is redundant
information for now, because the length of a value can be calculated
as the difference between the offset of the value and the offset of
the value that follows it. However, by storing the length redundantly,
we can later go and change the value of a length field to make a value
shorter, without having to move all the values that come after it.
This would leave a hole between the values.

One could also imagine proactively adding holes between values when
serializing a JSON document, so that variable-length values can grow
without other values having to be moved to make room. For now, though,
the serializer stores documents compactly, with no holes in them.

To compensate for the extra space needed by the redundant length
information, we will make the format allow offset and length fields to
come in two different variants: 2 bytes for documents smaller than
64KB, and 4 bytes to support larger documents.


In JSON documents nested within other JSON documents, all offsets are
relative to the beginning of the nested document, so that the nested
document can be moved around freely without having to update the
offsets. Every nested document is self-contained, and is a valid
binary JSON document on its own.


--------

doc ::= type value

type ::=
0x00 |       // small JSON object
0x01 |       // large JSON object
0x02 |       // small JSON array
0x03 |       // large JSON array
0x04 |       // literal (true/false/null)
0x05 |       // int16
0x06 |       // uint16
0x07 |       // int32
0x08 |       // uint32
0x09 |       // int64
0x0a |       // uint64
0x0b |       // double
0x0c |       // utf8mb4 string
0x0f         // custom data (any MySQL data type)

value ::=
object  |
array   |
literal |
number  |
string  |
custom-data

object ::= element-count size key-entry* value-entry* key* value*

array ::= element-count size value-entry* value*

// number of members in object or number of elements in array
element-count ::=
         uint16 |  // if used in small JSON object/array
         uint32    // if used in large JSON object/array

// number of bytes in the binary representation of the object or array
size ::= uint16 |  // if used in small JSON object/array
         uint32    // if used in large JSON object/array

key-entry ::= key-offset key-length

key-offset ::= uint16 |  // if used in small JSON object
               uint32    // if used in large JSON object

key-length ::= uint16    // key length must be less than 64KB

value-entry ::= type offset-or-inlined-value

// This field holds either the offset to where the value is stored,
// or the value itself if it is small enough to be inlined (that is,
// if it is a JSON literal or a small enough [u]int).
offset-or-inlined-value ::=
uint16 |   // if used in small JSON object/array
uint32     // if used in large JSON object/array

key ::= utf8mb4-data

literal ::=
0x00 |   // JSON null literal
0x01 |   // JSON true literal
0x02     // JSON false literal

number ::=  ....  // usual format for [u]int(16|32|64), double

string ::= data-length utf8mb4-data

custom-data ::= custom-type data-length binary-data

custom-type ::= uint8   // type identifier that matches the
                        // internal enum_field_types enum

data-length ::= uint8*  // If the high bit of a byte is 1, the length
                        // field is continued in the next byte,
                        // otherwise it is the last byte of the length
                        // field. So we need 1 byte to represent
                        // lengths up to 127, 2 bytes to represent
                        // lengths up to 16383, and so on...

--------

LIMITATIONS:

The total size of a binary document must be less than 4GB, because
offset and length fields are limited to four bytes. This is also the
maximum size of a LONGBLOB column, so documents larger than this could
not be stored in the database even if the format allowed them.

Every key of a JSON object must be smaller than 64KB, because the key
entries have only two bytes for representing the key length.

--------

In order to be able to store JSON values in this binary format, a new
MySQL data type called JSON will be added. This involves:

- Making the parser recognize "JSON" as a type name.

- Adding a subclass of Field that will be used when storing values of
  the new type in a table. This class will be called Field_json and
  extend the existing Field_blob class.

- Adding a new field type MYSQL_JSON_TYPE to the enum_field_types
  enumeration, to identify fields of the new type.

- Adding an interface that can be used to transform between the
  in-memory representation of a JSON document (the in-memory
  representation will be implemented in WL#7909) and the binary
  representation. That is, serialization and deserialization. This
  interface will be used by the functions added by WL#7909.

--------

The interface between this worklog and WL#7909 will be as follows:

1) Serialization:

WL#7909 adds a JSON DOM class hierarchy and a parser that builds a DOM
from a JSON text. This worklog will add a function that serializes the
DOM into its binary representation in a String object:

enum_serialization_result
json_binary::serialize(const Json_dom *dom, String *dest);

(The Json_dom class will be provided by WL#7909.)

The return value of json_binary::serialize() tells the status of the
serialization. It's an enum_serialization_result and can have one of
the following values:

  - OK: The JSON document was successfully serialized into the
    destination buffer.

  - VALUE_TOO_BIG: The JSON document was too big to be stored.

  - KEY_TOO_BIG: The JSON document contained a key that exceeded the
    maximum key length.

  - OOM: Out of memory, the serialization failed because it could not
    allocate more space.

  - FAILED: If some other, unexpected error occurred.

Code that serializes JSON values would typically look something like
this:

Json_dom *dom= ...;    // A Json_dom object that comes from a WL#7909 function
String str;            // A String that will hold the binary representation
switch (json_binary::serialize(dom, &str))  // Serialize dom into str
{
case OK:
  // Serialization went OK, now store the binary representation somewhere.
  store_data(str.ptr(), str.length());
  break;
case VALUE_TOO_BIG:
  // Couldn't serialize, raise appropriate error for the condition.
  my_error(...);
  break;
case KEY_TOO_BIG:
  // Couldn't serialize, raise appropriate error for the condition.
  my_error(...);
  break;
...
}

2) Deserialization:

This worklog will provide the following interface that can be used by
the WL#7909 functions to access the binary encoded JSON documents:

/*
  Return a handle to the top-level JSON value represented by the
  supplied binary data.
*/
Value json_binary::parse_binary(const char *data, size_t len);

class json_binary::Value
{
public:
  /* Does this value represent a valid document? */
  bool is_valid() const;

  /* What's the JSON type of this value? OBJECT, ARRAY, STRING, INT, etc. */
  enum_type type() const;

  /* Getters for various types. */
  const char *get_data() const;      // for STRING and OPAQUE
  size_t get_data_length() const;    // for STRING and OPAQUE
  int64 get_int64() const;           // for INT
  uint64 get_uint64() const;         // for UINT
  double get_double() const;         // for DOUBLE

  /* Size of array or object. */
  size_t element_count() const;

  /* Get the i'th element of an object or an array. */
  Value element(size_t i) const;

  /* Get the i'th key of an object. */
  Value key(size_t i) const;

  /* Look up the member with the given key in an object. */
  Value lookup(const char *key, size_t key_length) const;

  /*
    What's the MySQL type of this value? It only makes sense if this
    value is a JSON scalar that wraps an arbitrary MySQL value and
    type() returns OPAQUE.
  */
  enum_field_types field_type() const;

  /*
    Copy the binary JSON representation of this value into a buffer.
    Returns false on success, true on error.
  */
  bool raw_binary(String *buffer) const;
};

Callers will typically pass a pointer to the binary data to the
json_binary::parse_binary() function, which will return a handle to
the top-level document as a json_binary::Value object. The callers can
then use the json_binary::Value object to inspect the value at hand,
or use it to get other json_binary::Value objects that represent JSON
values nested within the top-level JSON value.

For example, to find a string on the path $[1].name in a JSON value,
one could use the interface like this:

String *str= read_data(...);     // Fetch the binary data from storage somehow.
Value doc= parse_binary(str->ptr(), str->length());
if (doc.type() == Value::ARRAY && doc.element_count() >= 2)
{
  Value v1= doc.element(1);
  if (v1.type() == Value::OBJECT)
  {
    Value v2= v1.lookup("name", 4);
    if (v2.type() == Value::STRING)
    {
      // found the string ...
      std::string res= std::string(v2.get_data(), v2.get_data_length());
      // ... now do something with it ...
    }
  }
}

Or to collect all the double values found in an array:

std::vector<double> vec;
String *str= read_data(...);     // Fetch the binary data from storage somehow.
Value doc= parse_binary(str->ptr(), str->length());
if (doc.type() == Value::ARRAY)
{
  for (size_t i= 0; i < doc.element_count(); i++)
  {
    Value v= doc.element(i);
    if (v.type() == Value::DOUBLE)
      vec.push_back(v.get_double());
  }
}

* Parser changes *

The new JSON type will be wired into the SQL parser by making the
"type" production rule in sql_yacc.yy recognize JSON as a type. Also,
to avoid breaking existing applications that use JSON as an
identifier, the new JSON keyword should be registered as a
non-reserved keyword.

The new type will be identified as MYSQL_TYPE_JSON, which field.cc
will map to Field_json.

* Field class changes *

The field class hierarchy will be extended with the class Field_json,
which will be a subclass of Field_blob. Field_json will have these
differences from Field_blob:

1. It will override store(char*, size_t, CHARSET_INFO*) to:

- Parse all text strings as JSON text and convert to binary
  representation before storing them, as per the conversion rules
  defined by WL#7909.

- Reject all binary strings.

2. It will add a new function store_json() which takes a Json_dom and
converts it to binary representation before storing it. This function
will be used by the WL#7909 functions to store a Json_dom to a field.

* Serialization *

The serialization function (json_binary::serialize()) will take a
Json_dom object and transform it into a string of bytes. Json_dom is
the base class of a class hierarchy used for representing a JSON
document as a DOM-style tree structure in memory. The Json_dom class
hierarchy will be added in WL#7909.

json_binary::serialize() will take the object representing the
top-level document, process the DOM tree recursively, and fill the
destination buffer as it descends the tree.

The format allows for using either two bytes or four bytes for
representing offsets and lengths in JSON objects and JSON arrays.
When serializing an object or an array, one doesn't know up front how
big the serialized representation will be, so one cannot tell whether
two bytes is sufficient to represent all the offsets and lengths. The
serializer will optimistically try to use two bytes from the start,
and when it finds that an object or array has grown so big that two
bytes is not enough, it will restart the serialization of that object
or array with four byte offsets/lengths.
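The retry strategy can be sketched like this. Here try_serialize() is a hypothetical stand-in that only checks whether all offsets would fit in the chosen field width, not the server's real serializer:

```cpp
#include <cstdint>

// Stand-in for the real serializer: it "fails" when the document
// needs offsets that don't fit in the chosen field width.
static bool try_serialize(uint32_t doc_size, bool large)
{
  const uint32_t max_offset = large ? UINT32_MAX : 0xFFFF;
  return doc_size <= max_offset;
}

// Optimistic serialization: attempt the compact two-byte ("small")
// format first, and restart with the four-byte ("large") format only
// if an offset overflows 16 bits.
static bool serialize_with_fallback(uint32_t doc_size, bool *used_large)
{
  if (try_serialize(doc_size, false))   // first attempt: 2-byte offsets
  {
    *used_large = false;
    return true;
  }
  *used_large = true;                   // restart with 4-byte offsets
  return try_serialize(doc_size, true);
}
```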

* Deserialization *

Deserialization of a binary JSON value will be lazy, and will only
look at the parts of the document that the caller asks for.

To deserialize a binary JSON value, one will call the function
json_binary::parse_binary(), which takes a pointer to a char buffer
that holds the binary representation of a JSON document. The function
will not parse the entire document. It just reads the header that
tells the type of the document and returns a json_binary::Value object
that points to it.

The returned json_binary::Value object can be used to inspect values
nested inside the top-level document. The nested values will also be
returned as json_binary::Value objects, and they will also just be
shallow wrappers around the buffer that holds the binary
representation.
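The shallow-wrapper idea can be sketched as follows. This hypothetical BinValue struct reads the small-array layout from the grammar earlier in this worklog (assuming little-endian integer fields), without copying or parsing anything beyond the entry it is asked for:

```cpp
#include <cstddef>
#include <cstdint>

// A shallow handle into a binary JSON buffer: it stores only a
// pointer and a length, and element access just computes positions in
// the fixed-size entry table. Small-array layout: element-count
// (2 bytes) + size (2 bytes), then one value-entry per element,
// each a 1-byte type followed by a 2-byte offset-or-inlined-value.
struct BinValue
{
  const unsigned char *data;
  size_t len;

  uint16_t read_u16(size_t pos) const
  {
    return static_cast<uint16_t>(data[pos] | (data[pos + 1] << 8));
  }

  size_t element_count() const { return read_u16(0); }

  // Type byte and offset field of the i'th value-entry.
  unsigned char element_type(size_t i) const { return data[4 + i * 3]; }
  uint16_t element_offset(size_t i) const { return read_u16(4 + i * 3 + 1); }
};
```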