The Tokenomics of Knowledge


Academic research is a noble pursuit which adds to the repository of public knowledge. But those who undertake it take on a lot of personal responsibility and, ultimately, a lot of risk.

  • Risky research can result in career ruin
  • Costly research may fail to raise the necessary funding
  • New discoveries may supersede existing findings

Creators should be directly incentivised to push the boundaries of human knowledge, but existing processes financially reward the big players while the authors generally miss out.

What if there were a way for researchers to recoup personal and financial costs directly, or even generate revenue from their work? Could they create financial value even while the research is still in progress?

Research Tokenomics

Tokenomics introduces a new method of revenue generation or self-funding without the need for an intermediary or “middle-man”. Just as cryptocurrencies take the bank out of the middle of a transaction between parties, research tokenisation would take corporate funders and publishers out of the academic process.

Micro-Payments for Cited Work

One example of this would be micro-payments for cited work. When authors publish their work, its findings are often used by other researchers to validate assumptions in their own studies – building upon the work of others rather than having to create concepts from scratch.

Research tokenomics would transfer a small amount of tokens to the original authors of the work every time it is referenced. The more useful or applicable the research, the more it is cited and the more tokens the authors can expect to earn. (Think of BAT* for content producers but in the academic space.)

(*BAT is Brave browser’s token. You can earn BAT by either watching ads or by authoring content. Others can contribute BAT when they consume content. This can either be a one-off payment or some kind of ongoing subscription. Instead of Google getting revenue for you consuming ads, or for you posting your content to Facebook who then monetise it, the end users are directly rewarded.)

The KnowledgeArc Network platform deploys smart contracts which track the citations of academic works and generate tokens, which are paid out to the original producers.

Researchers Could Raise Tokens Before Research Is Completed

Potentially, researchers could even raise tokens before and during the research process, introducing a funding dimension to the tokenomic model.

Ultimately, authors would be rewarded for the huge burden they take on as creators of knowledge.

Find out more about how KnowledgeArc Network is revolutionising how researchers can directly profit from their work.


Coronavirus and The Paywall Dilemma (Information Series Part II)

As the Coronavirus crisis deepens, quality information is critical to individual, community, state and national preparedness. Staying informed should be easy in the “digital age”, and it is, but it comes at a considerable cost, both financially and in terms of human health.

Some very large publishers have built substantial revenue streams by restricting access to valuable data. Using paywalls and subscription services, these organisations generate significant revenue from material they do not author.

As the “middleman” they can charge sizeable access fees which are too costly for most individuals and smaller institutions, especially in developing countries.

Subscriptions also require a large upfront payment, something that’s unattractive to someone simply looking for a particular piece of information.

In recent years, there’s been growing concern around the monetisation of academic research which is:

a) in the best interests of the public

b) funded by the public purse

Europe has taken a strong stance on ensuring publicly funded academic research is available for free, and there has been increased scrutiny of the limitations that paywalls and other subscription-based models place on access to medical and other scientific research.

The Coronavirus crisis has only reinforced the negative impact of paywalls on the dissemination of life-saving information, and the real-world consequences for people’s ability to find quality research.

Researchers and authors do need to be compensated for their efforts, but opportunistic “middle-men” should not be entitled to profiteer from the hard work of others.


What Covid-19 Has Taught Us About Knowledge Management (Information Series Part II)

One thing the Coronavirus outbreak has shown us is that quality information, based on quantitative research and professional recommendations, is key to ensuring the public is well informed and fully educated about a wide-scale health issue (or any issue, for that matter).


Subject repositories (or discipline repositories) attempt to collect information based on academic research about a particular subject or area of interest. They provide a one-stop shop for quality information, collating educational material, findings and other supporting documentation in a single location. Subject repositories should use well-researched scholarly information, and this information should be verified for authenticity with its source easily traceable.

Subject repositories are even more important in a decentralised world. Information could be stored and hosted across a number of disparate systems, which is ideal for circumventing the influence of nefarious parties who are looking to either control the narrative or benefit from playing up or playing down its impact. But by its very nature, decentralised data is difficult to find, search across and draw meaningful conclusions from.

In a decentralised world, subject repositories will be the gathering points for information from a wide range of sources. It will be more important than ever to attach a pseudonymous trail to the original material, ensuring both the integrity and truthfulness of the data while protecting the privacy of the source, especially in regimes which single out or punish purveyors of quality scientific information.

KnowledgeArc Network offers some mind-blowing alternatives to the way ‘the asset of knowledge’ has been managed to date…

When data is archived on a blockchain the information remains:

1. Immutable – the data can never be changed or corrupted

2. Persistent – it will last forever

3. Unique – there is no other information like this, it’s the single source

4. Open – the data is publicly accessible so others can build on the knowledge created

Come follow KnowledgeArc Network on LinkedIn for the latest tech and team articles, giving information-power (and potentially tokens) back to academic researchers and, ultimately, the communities who need it most.

August in review – Knowledge Identifiers

For August in Review, we’ve been quietly working on our next product, Knowledge Identifiers, a decentralized proof of ownership for scholarly works. Knowledge Identifiers work in a similar way to systems such as Digital Object Identifiers or Handle.net, but are not controlled by a single authority. Instead, a combination of smart contracts, decentralized file storage and database systems, as well as traditional web apps, will power this new permanent identification solution.

We will be making our functional specification for the development of Knowledge Identifiers publicly available, and we will post the link to this document shortly. We welcome participation in developing this new solution.

You can also follow our technical progress on Gitlab.

Other Developments for August in Review

We continue to discuss the coming decentralization of archived information with key players in the industry. We seek their expertise on how legacy systems function and how they can be improved through the use of blockchain technologies.

July 2019 in review

July may have been light on news but there have been a lot of developments which will improve KnowledgeArc.Network’s technology moving forward.

Using ARCH to cover Ethereum network costs

We have been investigating the concept of zero gas charges for our upcoming smart contracts. This means that you will not have to hold Ether, the default currency for handling any transaction fees on the Ethereum blockchain, when dealing with our smart contracts. Instead, all fees will be handled using Archive (ARCH) tokens which should aid in onboarding new users to the decentralized archive.

OrbitDB CLI

One of our developers has been working with the OrbitDB community to develop another way to communicate with the decentralized database system. For developers and technical users, you can find out more at https://github.com/orbitdb/go-orbit-db/.

Knowledge Identifiers

We’re working on a decentralized digital asset identification system using Ethereum smart contracts and OrbitDB. Knowledge Identifiers will provide an alternative to existing, centralized solutions such as Handle.net and DOI.

Such a system will provide immutable, permanent identification of digital assets, collections and even users in a trustless way, which means users won’t be beholden to a single point of failure; instead, they will be able to manage their identifiers on-chain with no third-party dependency.

This opens up exciting new use cases; identifiers will no longer simply be permanent links to an item. Instead they could potentially open up licensing, citation and other opportunities.

June 2019 in review

June was an important month in the evolution of KnowledgeArc.Network. We review some of the highlights from the month.

Whitepaper

We released our whitepaper early in June. This was an important step; even though we had been developing features and software for over two years, the whitepaper captured the reason behind KnowledgeArc.Network and distilled what our ecosystem is all about at a higher level.

Deploying our whitepaper to IPFS also highlighted our commitment to distributed technologies.

Exchange Listings

We’re committed to decentralization, distribution and democracy. Therefore, we are excited to see our cryptocurrency, Archive (ARCH), listed on two decentralized exchanges: SwitchDex and Ethermium.

We hope this will make it easier for our community to obtain Archive for ongoing development in the KnowledgeArc.Network ecosystem.

OrbitDB

It’s important for decentralized applications to move forward, and to be actively developed and supported. However, with dApps and other distributed applications being nascent technologies, not all of the underlying architecture is ready for production. As is often the case, software is still going through active development and requires a lot of resources to get it to a stable, production-ready state. This can mean that projects look stagnant even though developers are hard at work on various, related projects.

KnowledgeArc.Network uses IPFS as its underlying storage mechanism. This includes OrbitDB, a decentralized, peer-to-peer database system which uses IPFS for replication. OrbitDB is a powerful technology and will be one of the cornerstones of the new Web3, playing a role similar to the one MySQL played in the early web.

OrbitDB will be KnowledgeArc.Network’s decentralized storage layer, storing metadata and other supporting information. The ecosystem will be able to replicate these OrbitDB data stores as well as combine them to form larger databases.

OrbitDB is under active development. That is why we have contributed time and resources to assist with the success of this project. Some of our work includes co-contributing to the HTTP API and field manual as well as maintaining the Go implementation of OrbitDB.

The KnowledgeArc.Network Working Group

We have started a working group, a place for advisors and experts to discuss ways to decentralize archiving, peer review and journalling.

During June, we invited some project managers and librarians who work in the archiving space to join our working group and we welcome these new members. We hope to expand this group of experts and look forward to seeing what insights they can provide to this new ecosystem.

Taking back ownership of your data

The convenience of hosted solutions for digital assets and archiving can hide a major problem: do you control the data you own? KnowledgeArc.Network’s decentralized architecture ensures you are in full control of your data.

Do you really own your data?

Hosting digital assets in the cloud has become a popular and cost-effective solution. But what happens when you decide the host you are with is no longer providing the level of service you expect?

You may think migration is as simple as your existing host dumping the data out to a backup file and making it available for your new provider to restore. Unfortunately, the reality isn’t that simple; closed source applications often have proprietary formats which make them difficult or even impossible to import into other systems.

On the other hand, some open source systems are customized, but the customizations might not be publicly available, so backups only capture a subset of your data. For example, there are archive hosting providers who have built multi-tenant data storage on top of a single application. Databases in such a system cannot simply be lifted and re-implemented on other infrastructure. This results in broken features and crucial data being excluded from the system.

Even if migrating from one system to another runs smoothly, complex backups and time-consuming debugging are often required. Export/import tools need constant maintenance, but with niche products such as digital asset systems, maintenance of these ancillary tools can often be ignored.

A distributed solution

The KnowledgeArc.Network platform makes centralized storage obsolete. Data is replicated in multiple locations whilst still being owned by the original creator.

Replication allows application managers, developers and system administrators to build a variety of user experiences on top of the data. There is no need to set up complex data structures, import and export data, or work around missing data. Instead, the user simply replicates an existing database and works directly on top of it.

Data can also remain private even though it is stored in a public way. By encrypting data, the owner is the only one with access to this information and can grant other users varying degrees of control. For example, perhaps certain users might only be able to read data. Others might be able to update existing data but not delete it.

Centralized vs decentralized

Recently there has been a move to more centralized archiving solutions. Instead of disparate systems talking to one another or federated systems being established to support a “go-to” repository of information, a number of governments and bureaucracies are pushing for everything to be centralized. This results in a stagnation of innovation and, more importantly, a single point of failure.

Figure 1: Legacy Archives

KnowledgeArc.Network decentralized databases will capture the best of both worlds; every archive is unique but their records can easily be merged into a single, federated archive. This federated archive can then be replicated further so that multiple user interfaces can be created on top of the same data.

KnowledgeArc.Network captures the best of every model. Decentralized, independent databases provide institutions with full control and ownership of their data. Federated archives simply merge distributed databases into a single data store. And, finally, the entire community can build their own user experiences on top of any archived data by simply replicating an existing database.

Figure 2: Decentralized Archive

The Decentralized Archive Journey Begins

At KnowledgeArc.Network, we believe that the publishing, dissemination and archiving of information needs to fundamentally change.

Information should be open and public. It should also incentivize a decentralized community to participate in the creation, review, licensing, verification and archiving of information.

A democratized ecosystem for peer review

A single entity should not control and decide what quality content can or cannot be peer reviewed and published. Large, well-funded institutions should not receive preferential treatment over smaller, less-funded ones. Instead, we believe the entire community can actively participate in the review and publishing process. The community can decide inclusion of a work based on its merits rather than the size of an institution’s reach and influence.

Your data held for ransom

The convenience of a third-party hosting provider can often mean you give up control of your data. If you decide to change hosts or move information to in-house infrastructure, you are reliant on your existing host to hand over all your data. Depending on your agreement with your host, it may not be possible to salvage it all.

KnowledgeArc.Network uses decentralized technologies to store, sign and verify your archived information. An archiving provider can no longer hold your data exclusively; you and others can replicate your data, even if it is private, whether it is to another hosting provider, an in-house server or even your local computer.

Multiple versions of the data also ensure there isn’t a single point of failure.

Incentivizing the community

Current solutions incentivize and reward middlemen, but it is the authors, reviewers, end-users and developers who create all of the information from which these middlemen profit.

KnowledgeArc.Network aims to incentivize the community, with revenue going directly to the participants of the ecosystem. Revenue from citations and licensing will flow directly to the creators of works archived in the ecosystem through the use of automated agreements (smart contracts). Community members will conduct peer review, with smart contracts providing remuneration directly. Developers will have access to the entire system and will be able to create tools and processes which directly benefit all users. And users will be able to directly reward content creators for their contribution to the ecosystem.

Alternative statistics and metrics could even result in additional earnings for content creators as impact factor is monetized.

KnowledgeArc.Network whitepaper

We distilled our vision into our official whitepaper which is available for download.

Active development

The whitepaper is not the start of our development cycle. KnowledgeArc.Network has been in development for 2 years and momentum is growing.

We are integrating various technologies with our archiving platform and ecosystem and cultivating partnerships with other blockchain systems which we have identified as key to the evolution of the KnowledgeArc.Network ecosystem.

Tokenomics

The utility token, Archive (ARCH), powers transactions within the KnowledgeArc.Network decentralized ecosystem.

Community members participating in the ecosystem will be able to directly earn tokens; authors will earn through citations and licensing, peer reviewers through verifying the authenticity of works, developers by extending functionality and providing customizations and resource providers by providing solutions such as backups and hosting applications.

We are working on ways to make using Archive as easy as possible and are incentivizing key archiving players to embrace KnowledgeArc.Network and blockchain technologies to replace redundant solutions and methodologies.

Self-signed certificates with local root CA

This tutorial briefly covers creating and trusting your own certificate authority (CA) for issuing self-signed SSL certificates, and is designed to work with OrbitDB’s new REST API HTTP/2 push services.

This tutorial is aimed at Unix-based systems, in particular Ubuntu and other Debian-based Linux distributions, so you may need to modify the commands for your own platform. All code examples are intended to be copied and pasted directly into the command line and will generate certificates in your current working directory.
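If you want to keep the generated files together, you could first create a dedicated working directory; the path below is only an example:

# Optional: a scratch directory for the CA and certificate files (example path)
mkdir -p ~/local-ca
cd ~/local-ca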

To get started, we are going to create a root certificate which we will use to sign additional SSL certificates.

First, create your root CA private key:

openssl genrsa -des3 -out rootSSL.key 2048
Generating RSA private key, 2048 bit long modulus
………………+++
………………………………………………………………………+++
e is 65537 (0x010001)
Enter pass phrase for rootSSL.key:

You will be prompted for a password. Be sure to specify one that is long enough, as you may encounter errors if your password is too short.
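If you need a strong passphrase, one option (a suggestion, not a requirement of this tutorial) is to let openssl generate one for you:

# Print a random, base64-encoded 32-byte string to use as a passphrase
openssl rand -base64 32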

Next, use your CA private key to create a root certificate:

openssl req -x509 -new -nodes -key rootSSL.key -sha256 -days 1024 -out rootSSL.pem

Once launched, you will need to re-enter the password you assigned to your private key:

Enter pass phrase for rootSSL.key:

If successful, you will be prompted to provide information about your certificate:

 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 Country Name (2 letter code) [AU]:
 State or Province Name (full name) [Some-State]:WA
 Locality Name (eg, city) []:
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:
 Organizational Unit Name (eg, section) []:
 Common Name (e.g. server FQDN or YOUR name) []:localhost
 Email Address []:

You are now ready to install the new CA certificate into your CA trust store. The following commands will copy the root certificate into Ubuntu’s CA store so you may need to modify the paths if you are on a different distribution or OS platform:

sudo mkdir /usr/local/share/ca-certificates/extra
sudo cp rootSSL.pem /usr/local/share/ca-certificates/extra/rootSSL.crt
sudo update-ca-certificates

Now it is time to generate a certificate for your development environment. Create a certificate signing request (CSR) and a private key for your new certificate:

openssl req \
 -new -sha256 -nodes \
 -out localhost.csr \
 -newkey rsa:2048 -keyout localhost.key \
 -subj "/C=AU/ST=WA/L=City/O=Organization/OU=OrganizationUnit/CN=localhost/emailAddress=demo@example.com"

Next, create the certificate, signing it with your Root CA:

openssl x509 \
 -req \
 -in localhost.csr \
 -CA rootSSL.pem -CAkey rootSSL.key -CAcreateserial \
 -out localhost.crt \
 -days 500 \
 -sha256 \
 -extfile <(printf '%s\n' \
    "authorityKeyIdentifier=keyid,issuer" \
    "basicConstraints=CA:FALSE" \
    "keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment" \
    "subjectAltName=DNS:localhost")

Your SSL certificate is now ready for use. To use it with OrbitDB’s REST API, launch the cli.js script with the flags --https-key and --https-cert, using the new localhost.key and localhost.crt files we just created:

node src/cli.js api --ipfs-host localhost --orbitdb-dir ./orbitdb --https-cert ./localhost.crt --https-key ./localhost.key

The certificates should validate against your Root CA when used with tools such as curl:

curl -vs --http2 -X POST https://localhost:3000/db/my-feed --data 'create=true' --data 'type=feed'
 successfully set certificate verify locations:
 CAfile: /etc/ssl/certs/ca-certificates.crt
 CApath: /etc/ssl/certs
 ...
 SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
 ALPN, server accepted to use h2
 Server certificate:
 subject: C=AU; ST=WA; L=Ellenbrook; O=Organization; OU=OrganizationUnit; CN=localhost; emailAddress=demo@example.com
 start date: May 25 14:56:35 2019 GMT
 expire date: Oct  6 14:56:35 2020 GMT
 common name: localhost (matched)
 issuer: C=AU; ST=WA; L=Ellenbrook; O=Internet Widgits Pty Ltd; CN=Local Certificate
 SSL certificate verify ok. 

In the above, you can see the CA being loaded from the correct location (/etc/ssl/certs) and details about the certificate (Server certificate:).

You can now successfully run the new OrbitDB HTTP API with self-signed certificates on your development environment.

References

How to get HTTPS working in localhost development environment, secureend.com, https://reactpaths.com/how-to-get-https-working-in-localhost-development-environment-f17de34af046

OrbitDB HTTP API – A Practical Guide

OrbitDB is the distributed, p2p database system which will revolutionize the way we store, replicate, and disseminate information and will become the cornerstone of any dApp which requires data storage.

Too much centralization has put control of the internet into the hands of a few. Web3 aims to decentralize the internet, providing a more democratized, distributed ecosystem.

Most of the hype is around cryptocurrencies and the freedoms they will bring, cutting out the middleman and putting control back in the hands of the people. However, there are a number of less “high-profile” but equally game changing projects which will reshape the internet as we know it.

This how-to briefly covers the basics of creating an OrbitDB database and storing data, and introduces the most powerful feature of OrbitDB: replicating the data across multiple locations.

OrbitDB, the decentralized database

OrbitDB is a decentralized database system which uses IPFS for distributing stored data via P2P technologies. Storing data in OrbitDB ensures high availability and low latency due to the nature of distributed architectures such as IPFS.

Originally, OrbitDB was available as a Node.js library, so usage was limited to Node-based applications. However, with the recent release of the OrbitDB REST API, any language which supports REST calls can leverage this distributed database.

Setting up

Running an OrbitDB REST server is relatively straightforward, but some knowledge of working on the command line will be required. These steps assume you are running Linux or another Unix-based operating system; Windows users will need to translate the commands to their environment.

Prerequisites

Firstly, this guide assumes you can use a command line and install software. You don’t need to know Node.js or how peer-to-peer systems work, but you will need to be able to execute commands in a terminal. In this guide, all commands will be run from the terminal and will be represented like so:

type commands at the command line

You will also need two machines running, since we will be replicating a decentralized database. These can be two physical computers, a couple of virtual machines, or Docker containers.

Lastly, because the OrbitDB server uses Node.js, you will also need npm (bundled with Node.js) to install the dependencies. This tutorial will not cover the installation and configuration of these requirements.
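As a quick, optional sanity check, you can confirm on both machines that the required tools are installed before continuing:

# Confirm the prerequisites are available (any reasonably recent versions should work)
node --version
npm --version
ipfs version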

Running IPFS

OrbitDB uses IPFS to distribute and replicate data stores. The OrbitDB HTTP server runs in one of two modes: local or api.

When run in local mode, OrbitDB will run its own IPFS node. When run in api mode, OrbitDB will connect to an already-running IPFS node.
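For illustration only, the mode is selected by the first argument to cli.js. The api form below matches the command used later in this guide; the local form is an assumption and may differ, so check the repository’s README for the exact syntax (the TLS options discussed later are omitted here for brevity):

# api mode: attach to an already-running IPFS daemon (used later in this guide)
node src/cli.js api --ipfs-host localhost --orbitdb-dir ./orbitdb
# local mode: the server starts its own embedded IPFS node (assumed subcommand)
node src/cli.js local --orbitdb-dir ./orbitdb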

For this tutorial we will connect to a running IPFS daemon and will assume you already have IPFS installed. You will also want to run the IPFS daemon with pubsub enabled.

Start your first IPFS daemon by running:

ipfs daemon --enable-pubsub-experiment

Building the REST server

Now get a copy of the code. You can grab it via Github at https://github.com/orbitdb/orbit-db-http-api:

wget https://github.com/orbitdb/orbit-db-http-api/archive/master.zip

Alternatively, you can clone the git repo:

git clone https://github.com/orbitdb/orbit-db-http-api.git

Change into the project directory and install the dependencies (the directory name may differ slightly if you downloaded the zip archive):

cd orbit-db-http-api
npm install

Setting up the SSL certificates

The latest version of the OrbitDB HTTP API incorporates HTTP/2. Therefore, to run the server, you will need to generate SSL certificates.

There are a couple of options available for obtaining certificates: you can issue a certificate using a certificate authority such as Let’s Encrypt, or you can become your own CA. For development environments, the second option may be better; a thorough overview of how to do this is covered in the tutorial Self-signed certificates with local root CA.

The rest of this guide will assume you have a trusted SSL certificate set up and that curl will use your trust store to validate the certificate. If not, you will need to tell curl to ignore the certificate verification by passing the -k flag:

curl -k -X GET ...

Up and Running

Starting the HTTP API server

Start up the OrbitDB server and connect it to your running IPFS daemon:

node src/cli.js api --ipfs-host localhost --orbitdb-dir ./orbitdb --https-key localhost.key --https-cert localhost.crt

The --https-key and --https-cert options above assume you are using the certificate and key generated from the tutorial Self-signed certificates with local root CA. If not, replace with your own certificate and key.

Consuming our first request

The REST server is now running. You can test this by running something simple (we are going to use cURL for the rest of these commands, so make sure you have it installed):

curl -X GET https://localhost:3000/identity

This will return a JSON string representing your OrbitDB node’s identity information. This includes your public key (which we will use later).

Create a database

Creating a data store is very easy with the REST API and you can launch a store based on any of the supported types. For example, you can create a feed data store by running:

curl -X POST https://localhost:3000/db/my-feed --data 'create=true' --data 'type=feed'

You can also use JSON to specify the initial data store features:

curl -X POST https://localhost:3000/db/my-feed -H "Content-Type: application/json" --data '{"create":"true","type":"feed"}'

Add some data

Let’s add some data to our feed:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "A beginner's guide to OrbitDB REST API"

And view the data we have just added:

curl -X GET https://localhost:3000/db/my-feed/all

["A beginner's guide to OrbitDB REST API"]

Be aware that there are two different endpoints for sending data to the store, and which endpoint you use will depend on the store’s type. For example, you will need to call /put when adding data to a docstore.
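As an illustration (this assumes a hypothetical docstore named my-docs has already been created with "type":"docstore", and that documents are indexed by the default _id field), adding a record would look something like this:

# Hypothetical docstore example: /put expects a JSON document containing the index field
curl -X POST https://localhost:3000/db/my-docs/put \
 -H "Content-Type: application/json" \
 --data '{"_id":"orbitdb-guide","title":"A practical guide to the OrbitDB HTTP API"}'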

Replicating

Replicating is where the real power of distribution lies with OrbitDB. Replication is as simple as running an OrbitDB REST node on another machine.

Assuming you have a second computer which is accessible over your intranet or via Docker or a virtual machine, you can replicate the my-feed feed data store.

Getting ready to replicate

Before you replicate your feed data store, you will need to make a note of its address. You can do this by querying the data store’s details:

curl https://localhost:3000/db/my-feed

{"address":{"root":"zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw","path":"my-feed"},"dbname":"my-feed","id":"/orbitdb/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw/my-feed","options":{"create":"true","localOnly":false,"maxHistory":-1,"overwrite":true,"replicate":true},"type":"feed","capabilities":["add","get","iterator","remove"]}

Copy the id. We’re going to use it in the next step.

Running another copy of the data store

On your second machine, make sure you have IPFS running and the OrbitDB REST server installed and running.

Replicating the my-feed data simply requires you to query the first machine’s my-feed data store using its full address. Using the address of the my-feed data store we queried earlier, request the data:

curl https://localhost:3000/db/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw%2Fmy-feed/all

["A beginner's guide to OrbitDB REST API"]

You may need to run the curl call a couple of times; OrbitDB takes a short while to replicate the data over.

There are two important things to note about the address: 1) we drop the /orbitdb/ prefix, and 2) we need to URL-encode the /. The URL-encoded value of / is %2F.
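To make the transformation explicit, here is the same address written both ways, using the id returned earlier:

# id returned by the API:  /orbitdb/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw/my-feed
# path used in the URL:    zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw%2Fmy-feed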

And that’s it. You have successfully created a new OrbitDB data store on one machine and replicated it to another.

Let’s test it out. Back on your first machine, add another entry to the feed data store:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Learning about IPFS"

On your second machine, retrieve the feed list again:

curl https://localhost:3000/db/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw%2Fmy-feed/all

["A beginner's guide to OrbitDB REST API","Learning about IPFS"]

Adding data in a decentralized environment

What happens if you want to add more entries to the my-feed data store from your second machine?

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Adding an item from the second OrbitDB REST peer."
{"statusCode":500,"error":"Internal Server Error","message":"Error: Could not append entry, key \"03cc598325319e6c07594b50880747604d17e2be36ba8774cd2ccce44e125da236\" is not allowed to write to the log"}

If you check the output from your REST server you will see a permissions error. By default, any replicating node will not be able to write back to the data store. Instead, we have to tell the originating OrbitDB instance that the second instance can also write to the my-feed data store. To do this, we must manually add the public key of the second OrbitDB instance to the first instance.

It is important to note that the data store must be created with an access controller pre-specified. Start by deleting the data store on the first machine:

curl -X DELETE https://localhost:3000/db/my-feed

We must now set up the my-feed database again, this time specifying an access controller which lists the keys allowed to write:

curl -X POST https://localhost:3000/db/my-feed -H "Content-Type: application/json" --data '{"create":"true","type":"feed","accessController":{"type": "orbitdb","write": ["048161d9685991dc87f3e049aa04b1da461fdc5f8a280ed6234fa41c0f9bc98a1ce91f07494584a45b97160ac818e100a6b27777e0b1b09e6ba4795dcc797a6d8b"]}}'

Note the accessController property; this specifies the controller type and the key which can write to the database. In this case it is the first machine’s public key, which can be retrieved by running:

curl https://localhost:3000/identity

On the second machine, retrieve the public key:

curl https://localhost:3000/identity

Grab the publicKey value. We will now enable write access to the my-feed database:

curl -X PUT https://localhost:3000/db/my-feed/access/write --data 'publicKey=04072d1bdd0e5e43d9e10619d997f6293f4759959e19effb958785b7f08413fb81501496a043385c245dedc952ee01c06bc9c654afe79b11dd5f130796baf8d2da'

Here, publicKey is the public key of the second machine. We must execute this request on the first machine because only the first machine currently has write permission to the data store.

With the second machine’s publicKey added, we can go ahead and add a new entry to my-feed from the second machine:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Adding an item from the second OrbitDB REST peer."

Conclusion

This brief introduction to the new OrbitDB HTTP API will hopefully provide some insight into how OrbitDB functions and highlight some of the benefits a distributed database system brings to the decentralized web.

We have only scratched the surface of what is possible with OrbitDB. You could go ahead and add other machines to my-feed’s write access controller or create different data stores for storing data in different formats. Also the HTTP API is only in its infancy and there are a number of new features being actively developed.

This new chapter in OrbitDB’s brief history will bring a lot of new development, and providing access to other languages will expand its usability.