Let's assume a system with data sources such as wireless sensors, gateways and back-end servers. To ensure there is no single point of failure, some level of redundancy is needed: duplicating gateways, connections and back-end systems.
|Fault-tolerant IoT architecture.|
Data exchange between gateways and back-ends can be realized with the help of a distributed database, without the need for a separate transfer mechanism. As described in my earlier posting, distributed databases can be characterized by whether they favor availability or consistency.
GaianDB is a dynamic distributed federated database from IBM. It advocates a flexible "store locally, query anywhere" (SLQA) paradigm: data is stored in the database of the node that produced it, and queries are propagated across the whole cluster to find the requested data. This approach by itself does not guarantee high availability, but combined with redundancy it yields a nicely fault-tolerant system.
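The SLQA idea can be sketched as follows. This is a minimal, hypothetical model in Python: each node answers a query from its own local storage and then propagates it to peers it has not visited yet. GaianDB itself is queried with plain SQL (over Derby JDBC), so the `Node`, `store` and `query` names here are illustrative, not GaianDB's API.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.local_rows = []   # data stored only on this node
        self.peers = []        # links to neighbouring nodes

    def store(self, row):
        self.local_rows.append(row)

    def query(self, predicate, visited=None):
        """Answer from local storage, then propagate to unvisited peers."""
        visited = visited if visited is not None else set()
        visited.add(self)
        results = [r for r in self.local_rows if predicate(r)]
        for peer in self.peers:
            if peer not in visited:
                results.extend(peer.query(predicate, visited))
        return results

# Two gateways, each storing its own readings; either one can answer
# a query that spans both, because the query fans out across the links.
a, b = Node("gw-a"), Node("gw-b")
a.peers.append(b); b.peers.append(a)
a.store({"sensor": "s1", "temp": 21.5})
b.store({"sensor": "s2", "temp": 19.0})
print(a.query(lambda r: r["temp"] > 20))  # finds s1's reading via gw-a
```

Note that the data never moves: only the query and its results travel between nodes.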
In the diagram above, each sensor is expected to be heard by two or more gateways under no-fault conditions. Each gateway has its own database storing the data received from the sensors it can hear, so redundant copies of the data are recorded in the system. It is important to store or buffer data locally in the gateways: in case of a temporary connection failure the data is not lost, but can be retrieved from the gateway later.
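A sketch of that local buffering, assuming a hypothetical `send_to_backend` callable standing in for the real uplink: readings are queued locally and dropped only once the back-end has accepted them.

```python
from collections import deque

class Gateway:
    def __init__(self, send_to_backend):
        self.send_to_backend = send_to_backend  # callable; raises on failure
        self.buffer = deque()                   # survives temporary outages

    def on_sensor_data(self, reading):
        self.buffer.append(reading)
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.send_to_backend(self.buffer[0])
            except ConnectionError:
                return             # uplink down: keep data, retry later
            self.buffer.popleft()  # accepted by back-end, safe to drop
```

While the uplink is down, readings simply accumulate in the buffer; once it recovers, a later `flush()` drains the backlog in order.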
The cluster is dynamically self-organizing: it always looks for an optimal route between nodes, if one exists. If an individual link or node is lost, data is routed around the failure. Thanks to the redundancy, no single failure at any given moment can stop the whole system from working.
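The rerouting behaviour can be illustrated with a toy topology: two gateways, two back-ends, and a breadth-first search standing in for the cluster's route discovery (the real mechanism in GaianDB is its own; this is only a sketch of the principle).

```python
from collections import deque

def find_route(links, src, dst):
    """Return a node path from src to dst, or None if disconnected."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Redundant topology: every gateway is linked to every back-end.
links = {
    "gw-a": {"backend-1", "backend-2"},
    "gw-b": {"backend-1", "backend-2"},
    "backend-1": {"gw-a", "gw-b"},
    "backend-2": {"gw-a", "gw-b"},
}
print(find_route(links, "gw-a", "backend-1"))  # ['gw-a', 'backend-1']

# A link fails: traffic is rerouted through the redundant path.
links["gw-a"].discard("backend-1")
links["backend-1"].discard("gw-a")
print(find_route(links, "gw-a", "backend-1"))
# ['gw-a', 'backend-2', 'gw-b', 'backend-1']
```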
Databases favoring consistency are a poor fit for fault-tolerant architectures. Typically, such a database has one instance defined as the master for any given piece of data. The data is readable via every secondary instance, but if the master goes down, all the secondaries stop serving that data, as they can no longer guarantee its consistency. RethinkDB is a popular example of such a database.
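A toy illustration of that consistency-first trade-off: secondaries serve reads only while they can reach the master, and refuse otherwise. The class names and behaviour are illustrative only, not RethinkDB's actual API.

```python
class Master:
    def __init__(self):
        self.up = True

class Secondary:
    def __init__(self, master):
        self.master = master
        self.replica = {}  # locally replicated copy of the data

    def read(self, key):
        if not self.master.up:
            # The replica may be stale and cannot be verified: refuse
            # the read (consistency is chosen over availability).
            raise RuntimeError("master unreachable, read refused")
        return self.replica.get(key)

m = Master()
s = Secondary(m)
s.replica["temp"] = 21.5
print(s.read("temp"))   # 21.5 while the master is reachable
m.up = False            # master fails
# s.read("temp") would now raise, even though the replica holds the value
```

An availability-favoring system like the GaianDB setup above would instead keep answering from whatever copies remain reachable.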