Sol - An MQTT broker from scratch. Part 5 - Topic abstraction
posted on 2019 Mar 08
In Part 4 we explored some useful concepts and implemented two data structures
on top of them.
The MQTT protocol defines an abstraction named topic: in essence a string,
a label used to filter messages for each client. It follows a
hierarchical model, described by some simple rules:
a topic is a UTF-8 encoded string with a maximum length of 65535 bytes
a forward slash / is used to separate different levels, much like directories in
a filesystem
# can be used as a wildcard for multilevel subscriptions, to subscribe to
multiple topics by following the hierarchy, e.g. foo/bar/# will subscribe to:
foo/bar
foo/bar/baz
foo/bar/bat/yop
+ can be used as a single-level wildcard, e.g. foo/+/baz will
subscribe to:
foo/bar/baz
foo/zod/baz
foo/nop/baz
They share some traits with message queues, but are way simpler, more
lightweight and less powerful.
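Those matching rules are simple enough to sketch in a few lines of C. This is just an illustration, not sol's actual matching code, and topic_match is a made-up name:

```c
#include <stdbool.h>
#include <string.h>

/* Illustrative only: returns true if the subscription string `sub`
 * (which may contain + and # wildcards) matches the concrete `topic`. */
static bool topic_match(const char *sub, const char *topic) {
    while (*sub && *topic) {
        if (*sub == '#')
            return true;                 /* # swallows the rest of the hierarchy */
        if (*sub == '+') {
            while (*topic && *topic != '/')
                topic++;                 /* + consumes exactly one level */
            sub++;
        } else if (*sub++ != *topic++) {
            return false;                /* literal characters must match */
        }
    }
    if (*sub == '\0' && *topic == '\0')
        return true;                     /* both fully consumed */
    /* foo/bar/# also matches foo/bar itself */
    return strcmp(sub, "#") == 0 || strcmp(sub, "/#") == 0;
}
```

With this, topic_match("foo/bar/#", "foo/bar/bat/yop") holds, while topic_match("foo/+/baz", "foo/bar") does not, since + must match exactly one level.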
Handling topic abstraction: the trie
We move now to the trie, the structure of choice to store topics. A trie is a
type of tree in which each node is a prefix for a key; the position of a node
defines its key, and the associated values are set on the last node of each key.
Tries provide a worst-case time complexity of O(m) for insertion
and lookup, where m is the length of the key. The main advantage is the
possibility of querying the tree by prefix, executing range scans in an easy way.
src/trie.h
Implementing this data structure is a bit tricky and there are a lot of
different approaches; the simplest one involves the use of a fixed-length
array on each node of the trie, with the complete alphabet size as its
length.
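A minimal sketch of that naive approach might look like this (field names are illustrative, not the definitions from src/trie.h):

```c
#include <stdlib.h>

#define ALPHABET_SIZE 96  /* printable ASCII: from ' ' (0x20) to '~' (0x7e) */

/* One pointer per possible character: simple, fast, memory hungry. */
struct trie_node {
    void *data;                                 /* set on the last node of a key */
    struct trie_node *children[ALPHABET_SIZE];  /* index = character - ' ' */
};

static struct trie_node *trie_node_new(void) {
    /* calloc zeroes the whole node, so every child starts out NULL:
     * 96 pointers * 8 bytes = 768 bytes of (mostly unused) space each */
    return calloc(1, sizeof(struct trie_node));
}
```

Lookup then becomes a simple walk, indexing children with `key[i] - ' '` at each level.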
The biggest advantage of the trie, besides the possibility of applying range
queries on the keyspace (this one will come in handy for wildcard subscriptions
and the management of topics), lies in its average performance compared to
hashtables or B-trees: it gives worst-case O(m) insert, delete and search time
complexity, where m is the length of the key. This comes at a cost: the main
drawback is that the structure itself, following this implementation, is really
memory hungry. In our example case, with an alphabet of size 96, starting from
the <space> character and ending with ~, each node has 96 NULL pointers
to its children; this means that on a 64-bit arch machine with 8 bytes per
pointer we effectively have 768 bytes of allocated space per node. Let's
briefly analyze a case:
insert key foo
insert key foot
So we have the root child f, which has one non-null pointer to o; that child has
another single non-null pointer to o, where our first value, for key foo, lives; and the
last child o has one non-null pointer to t, which also stores our second
value, for key foot. So we have a total of 4 nodes, which means 4 * 96 = 384
pointers, of which only 4 are used. Now that's a lot of wasted space! There are
some techniques to mitigate this humongous waste of bytes while
maintaining good time-complexity performance, called compressed tries and
adaptive tries.
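To make the arithmetic above concrete, a couple of throwaway helpers (purely illustrative, not part of the project):

```c
#define ALPHABET_SIZE 96  /* child pointers per node */
#define PTR_SIZE      8   /* bytes per pointer on a 64-bit arch */

/* Bytes allocated for the child-pointer arrays of `nodes` trie nodes. */
static long child_array_bytes(long nodes) {
    return nodes * ALPHABET_SIZE * PTR_SIZE;
}

/* Bytes actually holding live, non-NULL pointers. */
static long live_pointer_bytes(long used) {
    return used * PTR_SIZE;
}
```

For the foo/foot example: 4 nodes allocate 4 * 96 = 384 pointers (3072 bytes), of which only 4 (32 bytes) are live.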
Without going too deep into these concepts, the best solutions so far seem to be three:
Use a single dynamic array (vector) in the trie structure; each node holds
a pointer to that vector and an array char children_idx[ALPHABET_SIZE] which
stores, for each child, its index in the main vector;
Use nodes sized according to the number of children, adapting the lookup
algorithm accordingly, e.g. with <= 4 children use a fixed-length array
of 4 pointers and a linear search among them, growing in fixed steps of
size and using a different mapping of characters onto the array of children
pointers;
Replace the fixed-length array on each node with a singly linked list,
kept sorted on each insertion; this way a search costs O(n/2) on average,
still O(n), which is the best possible with the linked list data
structure.
Luckily we've just written a linked list (perhaps I knew the answer?
:P), but a vector could also do well.
Let’s implement our trie with the third solution:
src/trie.c
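For a rough idea of how the third solution shapes up, here is a self-contained sketch with children kept in a sorted singly linked list; identifiers here are illustrative and won't match the actual src/trie.c:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch only: each node stores one character, an optional value, a link to
 * its first child and a link to its next sibling, siblings sorted by chr. */
struct trie_node {
    char chr;                   /* character this node represents */
    void *data;                 /* value, non-NULL only on terminal nodes */
    struct trie_node *next;     /* next sibling, sorted ascending by chr */
    struct trie_node *children; /* head of the sorted child list */
};

static struct trie_node *node_new(char c) {
    struct trie_node *n = calloc(1, sizeof(*n));
    n->chr = c;
    return n;
}

/* Linear search among siblings; the sorted order lets us stop early,
 * hence the O(n/2) average mentioned above. */
static struct trie_node *find_child(struct trie_node *parent, char c) {
    for (struct trie_node *n = parent->children; n && n->chr <= c; n = n->next)
        if (n->chr == c)
            return n;
    return NULL;
}

/* Insert a new child, keeping the sibling list sorted. */
static struct trie_node *add_child(struct trie_node *parent, char c) {
    struct trie_node *n = node_new(c);
    struct trie_node **pp = &parent->children;
    while (*pp && (*pp)->chr < c)
        pp = &(*pp)->next;
    n->next = *pp;
    *pp = n;
    return n;
}

/* Walk the key one character per level, creating missing nodes. */
void trie_insert(struct trie_node *root, const char *key, void *data) {
    struct trie_node *cur = root;
    for (; *key; key++) {
        struct trie_node *child = find_child(cur, *key);
        if (!child)
            child = add_child(cur, *key);
        cur = child;
    }
    cur->data = data;   /* value lives on the last node of the key */
}

/* Returns the value for key, or NULL if absent (or non-terminal). */
void *trie_find(struct trie_node *root, const char *key) {
    struct trie_node *cur = root;
    for (; *key; key++) {
        cur = find_child(cur, *key);
        if (!cur)
            return NULL;
    }
    return cur->data;
}
```

Note how the foo/foot example from before now allocates exactly one node per character instead of 96 pointers per level.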
Well, we have enough on our plate for now; our project should now have 3 more
modules: