In the past few months, we have worked relentlessly on the development of the AllianceBlock Data Tunnel. After announcing our partnership with Ocean Protocol, we built a prototype that was vetted by Ocean, which resulted in a grant from the Ocean Shipyard program.
Last week we completed the Proof of Concept version of the Data Tunnel. While this is not a release-ready version, it is a major technological milestone for the Data Tunnel. As outlined in the Data Tunnel roadmap, the MVP (minimum viable product) is scheduled for release by the end of March this year and will be available as a production-ready service.
The Data Tunnel is built on a few key concepts:
- It’s easy to publish any type of data, like CSV, XML or JSON
- It’s easy to consume any type of data either manually or programmatically
- Data from the Data Tunnel is predictable, and you can query data dynamically
- The Data Tunnel is scalable and allows decentralized access control
With the Proof of Concept, we have created a platform that accepts any of the major data formats and stores them, via a serverless infrastructure, in one extremely scalable default format that is easy to store, update, and query.
Together with Ocean, we are well on track to fulfill our vision of connecting traditional finance (TradFi) and decentralized finance (DeFi). The Data Tunnel will be the first of a number of products that will simplify compliance with financial regulations.
It’s easy to publish any type of data, like CSV, XML or JSON
CSV and XML are among the most widely used and adopted dataset formats. Data in both formats is easy to generate, easy enough for humans to read, and straightforward for applications to consume. JSON is a format that is widely adopted by APIs; it is easy to implement in applications and lightweight to consume.
In order to support a wide array of datasets, we have opted to support these widely used formats as input data for the Data Tunnel. Each dataset submitted to the Data Tunnel is immediately stored in a format that is scalable, easy to store, and easy to query, so that data consumers have fast access to large datasets, no matter what the original input data was. Each dataset's subsequent updates are also tracked, so that an audit trail exists of how the data has evolved.
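To illustrate the normalization idea, here is a conceptual sketch (not the Data Tunnel's actual ingestion code) that converts a small CSV input into the kind of uniform JSON records that can then be stored and queried:

```typescript
// Minimal sketch of input normalization: CSV in, JSON records out.
// This illustrates the concept only; it is not the Data Tunnel's actual code.
type Row = { [column: string]: string };

function csvToJson(csv: string): Row[] {
  const [headerLine, ...rows] = csv.trim().split("\n");
  const columns = headerLine.split(",").map((c) => c.trim());
  return rows.map((row) => {
    const values = row.split(",").map((v) => v.trim());
    // Pair each value with its column name to build one JSON object per row.
    return Object.fromEntries(columns.map((col, i) => [col, values[i] ?? ""]));
  });
}

// Example: a small CSV becomes an array of JSON objects, ready to store and query.
const csv = "pool,staked\nALBT-ETH,1200\nALBT-USDT,800";
console.log(JSON.stringify(csvToJson(csv), null, 2));
```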
It’s easy to consume any type of data either manually or programmatically
For datasets to be useful, they need to be easy to access. All output data consumed from the Data Tunnel is returned in JSON, which is easy to consume in itself. To query the Data Tunnel for a subselection of the JSON data, you will be able to use our new proprietary query language: ABQL.
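As an illustration of programmatic consumption, assuming a hypothetical REST endpoint (the URL and response shape below are invented for the example, not the published API), fetching a dataset might look like this:

```typescript
// Hypothetical consumption sketch: the endpoint URL and response shape are
// assumptions for illustration, not the published Data Tunnel API.
interface TunnelResponse {
  schema: unknown;   // JSON schema describing the data structure
  data: unknown[];   // the dataset records themselves
}

async function fetchDataset(datasetId: string): Promise<TunnelResponse> {
  const res = await fetch(`https://datatunnel.example.com/datasets/${datasetId}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as TunnelResponse;
}
```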
The AllianceBlock Query Language
ABQL is the proprietary query language developed for all datasets available on the AllianceBlock Data Tunnel. The key concepts of ABQL are:
- Fully compliant with the JSON standard
- The possibility to generate dynamic queries without human intervention
- Easy to learn and integrate into existing applications
- Inherent support for nested objects
An ABQL tutorial, developer documentation, and an environment to try out ABQL will be available around the time of the MVP. Auto-generated and pre-structured queries with preselected variables will be available by default for non-developers, for easy viewing of data. ABQL is also the basis for the AllianceBlock Data Tunnel UI for quick analysis of the data.
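ABQL's syntax has not been published yet, so the snippet below is purely a hypothetical sketch of what a JSON-compliant query could look like; every field name and operator in it is invented for illustration:

```typescript
// Hypothetical query shape only; the real ABQL syntax will ship with the MVP
// documentation. Because the query itself is plain JSON, programs can compose
// and modify it dynamically, without human intervention.
const query = {
  from: "lp-staking-pools",            // dataset identifier (invented)
  where: { pool: { eq: "ALBT-ETH" } }, // nested objects are supported natively
  select: ["pool", "staked"],
  aggregate: { totalStaked: { sum: "staked" } },
};

// A machine-generated variant needs no human intervention:
function queryForPool(pool: string) {
  return { ...query, where: { pool: { eq: pool } } };
}
```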
Data from the Data Tunnel is predictable, and you can query data dynamically
Each time data is queried from the Data Tunnel, a schema of the data (also in JSON format) is returned together with the data. This schema explains the data structure to the data consumer (e.g. application developers). With the schema, all data output can be predicted, enabling developers to create dynamic layers that support all datasets published through the AllianceBlock Data Tunnel. Furthermore, combined with ABQL, developers can create dynamic queries that are generated on the fly, so that all data can be analyzed without human intervention.
Furthermore, queries that alter the output of the data (aggregations, different column names, etc.) will come with an automatically generated schema that fits the output of that query, so that the output can automatically be queried once more using ABQL.
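As a sketch of how a returned schema enables such dynamic layers (the schema layout below is an assumption modeled on JSON Schema; the actual format may differ), a consumer could process any dataset generically:

```typescript
// Sketch: use the schema returned alongside the data to process rows
// generically, without hard-coding any particular dataset's structure.
// The schema layout below is an assumption modeled on JSON Schema.
interface FieldSchema { type: "string" | "number" | "boolean"; }
interface DatasetSchema { properties: { [field: string]: FieldSchema }; }

function describe(schema: DatasetSchema, rows: Array<{ [k: string]: unknown }>) {
  for (const [field, spec] of Object.entries(schema.properties)) {
    if (spec.type === "number") {
      // Numeric columns can be aggregated automatically.
      const sum = rows.reduce((acc, r) => acc + (r[field] as number), 0);
      console.log(`${field}: numeric, sum=${sum}`);
    } else {
      console.log(`${field}: ${spec.type}`);
    }
  }
}
```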
Ultimately, ABQL combined with the generated JSON data schemes, will help drive adoption of decentralized access to datasets.
The output data is separated into two parts: the schema that describes the data, and the data itself. As an example, the aggregated amount of LP tokens staked in a pool can be retrieved through ABQL, which makes the Data Tunnel well suited to blockchain analysis too.
The Data Tunnel is scalable and allows decentralized access control
The Data Tunnel runs on cutting-edge serverless AWS services that are well suited to large amounts of data with low-latency, high-frequency access, making data on the Data Tunnel scalable. It doesn't matter whether a dataset is 200KB or 200GB: its data is always quick to access and easy to query using ABQL and the JSON schema.
Leveraging Ocean Protocol, the publisher of data remains in control of their data. Through Data Access Tokens, access can be purchased and traded amongst consumers.
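As a rough sketch of what token-gated access control can look like on-chain (assuming Ocean-style ERC-20 datatokens and ethers v5; the one-token threshold and all addresses are illustrative assumptions, not the final mechanism):

```typescript
import { ethers } from "ethers";

// Sketch of datatoken-gated access, assuming Ocean-style ERC-20 datatokens.
// The 1-token threshold and contract addresses are illustrative assumptions.
const ERC20_ABI = [
  "function balanceOf(address owner) view returns (uint256)",
];

async function hasAccess(
  provider: ethers.providers.Provider,
  datatoken: string,   // datatoken contract address for the dataset
  consumer: string     // wallet address requesting access
): Promise<boolean> {
  const token = new ethers.Contract(datatoken, ERC20_ABI, provider);
  const balance: ethers.BigNumber = await token.balanceOf(consumer);
  // Holding at least one datatoken grants the right to consume the dataset.
  return balance.gte(ethers.utils.parseEther("1"));
}
```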
Trustless & Reusable Identity Verifications
We have also made advancements on the trustless and reusable identity verification service. This service allows service providers to reuse anonymized identity verifications. Users will not have to go through a KYC verification process each time, or share their most sensitive data with different service providers. Service providers, on the other hand, will not have to pay for every verification, nor implement time-consuming infrastructure in order to be GDPR compliant.
Reusable identity verifications are anonymized, but contain a signature that can be used to confirm that a trusted identity verification provider has indeed verified this user. This prevents fraud, without the service provider having to know who they are really dealing with.
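As a minimal sketch of how such a check could work (assuming the provider signs the anonymized attestation with an Ethereum-style key via ethers v5; the payload shape and signature scheme are invented for the example):

```typescript
import { ethers } from "ethers";

// Sketch of checking a reusable verification. The attestation shape and the
// use of an Ethereum-style ECDSA signature here are illustrative assumptions.
interface Attestation {
  subjectId: string;   // pseudonymous user identifier, no personal data
  verifiedAt: number;  // unix timestamp of the original KYC check
  signature: string;   // provider's signature over the fields above
}

function isFromTrustedProvider(
  att: Attestation,
  trustedProviders: Set<string> // known verification-provider addresses
): boolean {
  const message = JSON.stringify({ subjectId: att.subjectId, verifiedAt: att.verifiedAt });
  // Recover the signer address; a service provider can confirm the attestation
  // came from a trusted verifier without learning who the user really is.
  const signer = ethers.utils.verifyMessage(message, att.signature);
  return trustedProviders.has(signer);
}
```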
This service is still in its early stages; we expect to have it production ready together with the Data Tunnel MVP.
Benefiting ALBT holders
ALBT is integral to the AllianceBlock protocol. The solutions described above are only possible with the use of the ALBT token.
The AllianceBlock ecosystem benefits both data publishers and data consumers. Users are incentivized to share their data, as they can monetize it while remaining in control of access at all times. A number of actions necessitate the payment or receipt of ALBT, including:
- Data query: Each request for data is paid in ALBT tokens
- Dataset generation: Generating a dataset is a more complex data query and commands a (normally higher) fee payable in ALBT tokens
- Data collection: Collecting data from integrated streams is incentivized with ALBT
- Storage: Storing files on IPFS is paid in ALBT
- Processing: Processing data across the ecosystem and to integrated systems requires ALBT
- Data distribution: A network fee is required to redistribute the data to ecosystem actors
- Data analysis: Analyzing obtained data using various mathematical methods requires ALBT
- Data replication: A network fee is required in order to replicate the data
- Data extraction: A network fee is required to extract the data to different data formats (e.g. .csv)
- Data validation: Comparing two datasets to check data accuracy requires a network fee
About AllianceBlock
AllianceBlock is building the first globally compliant decentralized capital market. The AllianceBlock Protocol is a decentralized, blockchain-agnostic layer 2 that automates the process of converting any digital or crypto asset into a bankable product.
Incubated by three of Europe's most prestigious incubators (Station F, L39, and Kickstart Innovation in Zurich) and led by a highly experienced team of former JP Morgan, Barclays, BNP Paribas, and Goldman Sachs investment bankers and quants, AllianceBlock is on the path to disrupting the $100 trillion securities market with its state-of-the-art, globally compliant decentralized capital market.
Website | Telegram | Discord | CoinGecko | White Paper | Green Paper | Token Economics Paper