August 2019 in review

We’ve been quietly working on our next product, Knowledge Identifiers, a decentralized proof of ownership for scholarly works. Knowledge Identifiers work much like existing systems such as Digital Object Identifiers or Handle.net, but are not controlled by a single authority. Instead, a combination of smart contracts, decentralized file storage and database systems, as well as traditional web apps, will power this new permanent identification solution.

We will be making our functional specification for the development of Knowledge Identifiers publicly available, and we will post the link to this document shortly. We welcome participation in developing this new solution.

You can also follow our technical progress on GitLab.

Other Developments

We continue to discuss the coming decentralization of archived information with key players in the industry. We seek their expertise on how legacy systems function and how they can be improved through the use of blockchain technologies.

Combat misconduct in research using blockchain

There are issues to be solved in academia. Some claim, and research supports this, that misconduct in research has increased. Can blockchain be part of the solution to reduce this behavior? How can we combat misconduct in research using blockchain?

Before we answer this question, we need to look at the concept of misconduct. What is misconduct in research?

What is misconduct in research?

Photo by Rohan Makhecha on Unsplash

Misconduct in research can take many forms; it can be premeditated or the result of a lack of knowledge. Both are harmful in many ways to society, research, and individuals. Examples of misconduct include fabricating or manipulating data, plagiarism and duplicate publication, to mention a few. Such misconduct is serious and affects academia in many ways, both directly and indirectly. As it is today, there are few mechanisms to identify and respond to misconduct.

One paper that discusses this increase in academic misconduct is Vijay Mohan’s article “On the use of blockchain-based mechanisms to tackle academic misconduct” (Mohan, 2019). Here you get some background on how he believes the “winner takes it all”, contest-like situation in academia creates fertile ground for more incidents of misconduct. The article provides a good framework for understanding the situation.

Back to the blockchain. One of the solutions to combat misconduct, he suggests, is to use blockchain technology. In short, his idea is that blockchain can provide methods and technology for alleviating problems within the academic publishing industry. The idea is that blockchain can work as a monitoring technology, and thus be part of a solution that increases the probability that misconduct will be detected. According to Mohan, there are not enough such monitoring platforms today.

Read the article by Vijay Mohan published in Research Policy 48 (2019) (subscription journal)

KnowledgeArc and blockchain

At KnowledgeArc we believe in blockchain. Firstly, we believe the technology can be a way to meet the challenges of open science. Secondly, we believe it can be a game changer in shifting the focus from quantity to quality in publishing. And lastly, we believe blockchain can add security and full openness to the equation. With this in mind, we also believe that blockchain can be one of several initiatives which contribute to improved academic processes.

These are just some of the ways we can combat misconduct in research using blockchain. What are your thoughts in this regard? Leave comments below…

Read more about our blockchain developments here.

July 2019 in review

July may have been light on news but there have been a lot of developments which will improve KnowledgeArc.Network’s technology moving forward.

Using ARCH for covering Ethereum network costs

We have been investigating the concept of zero gas charges for our upcoming smart contracts. This means that you will not have to hold Ether, the currency used to pay transaction fees on the Ethereum blockchain, when dealing with our smart contracts. Instead, all fees will be handled using Archive (ARCH) tokens, which should aid in onboarding new users to the decentralized archive.

OrbitDB CLI

One of our developers has been working with the OrbitDB community to develop another way to communicate with the decentralized database system. For developers and technical users, you can find out more at https://github.com/orbitdb/go-orbit-db/.

Knowledge Identifiers

We’re working on a decentralized digital asset identification system using Ethereum smart contracts and OrbitDB. Knowledge Identifiers will provide an alternative to existing, centralized solutions such as Handle.net and DOI.

Such a system will provide immutable, permanent identification of digital assets, collections and even users in a trustless way, which means users won’t be beholden to a single point of failure; instead, they will be able to manage their identifiers on chain with no third-party dependency.

This opens up exciting new use cases; identifiers will no longer simply be permanent links to an item. Instead they could potentially open up licensing, citation and other opportunities.

Identifying academic content using the blockchain

The KnowledgeArc.Network blockchain developers have been working hard over the summer. Here is some news on what they have been working on.

Identifiers

One of the ways we believe the blockchain can add real value to the scientific process is to have stable, permanent and open identifiers. Therefore, we are working on how to implement such identifiers, like author IDs or persistent identifiers for academic content.

Currently, if your third-party identification provider stops working, introduces a bug or simply decides they don’t want to be in business any more, your identifiers will be lost forever. Moving identifiers to the blockchain ensures true permanency and full ownership by you.

Read more about our blockchain development in our blog.

You can also check out our whitepaper which we launched in June.

June 2019 in review

June was an important month in the evolution of KnowledgeArc.Network. We review some of the highlights from the month.

Whitepaper

We released our whitepaper early in June. This was an important step; even though we had been developing features and software for over two years, the whitepaper captured the reason behind KnowledgeArc.Network and distilled what our ecosystem is all about at a higher level.

Deploying our whitepaper to IPFS also highlighted our commitment to distributed technologies.

Exchange Listings

We’re committed to decentralization, distribution and democracy. Therefore, we are excited to see our cryptocurrency, Archive (ARCH), listed on two decentralized exchanges: SwitchDex and Ethermium.

We hope this will make it easier for our community to obtain Archive for ongoing development in the KnowledgeArc.Network ecosystem.

OrbitDB

It’s important for decentralized applications to move forward, and to be actively developed and supported. However, with dApps and other distributed applications being nascent technologies, not all of the underlying architecture is ready for production. As is often the case, software is still going through active development and requires a lot of resources to get it to a stable, production-ready state. This can mean that projects look stagnant even though developers are hard at work on various, related projects.

KnowledgeArc.Network is using IPFS as the underlying storage mechanism. This includes OrbitDB, a decentralized, peer-to-peer database system which uses IPFS for replication. OrbitDB is a powerful technology and will be one of the cornerstones of the new Web3, playing a role similar to the one MySQL played for the first generation of the web.

OrbitDB will be KnowledgeArc.Network’s decentralized storage layer, storing metadata and other supporting information. The ecosystem will be able to replicate these OrbitDB data stores as well as combine them to form larger databases.

OrbitDB is under active development. That is why we have contributed time and resources to assist with the success of this project. Some of our work includes co-contributing to the HTTP API and field manual as well as maintaining the Go implementation of OrbitDB.

The KnowledgeArc.Network Working Group

We have started a working group, a place for advisors and experts to discuss ways to decentralize archiving, peer review and journaling.

During June, we invited some project managers and librarians who work in the archiving space to join our working group and we welcome these new members. We hope to expand this group of experts and look forward to seeing what insights they can provide to this new ecosystem.

Archives on the chain

We have all heard about the blockchain, or at least we have heard about bitcoin and other digital currencies which experienced another cycle of hype a couple of years ago. As with the cycles before it, the hype died down, and some made profits while others were left with little (or even no) value after investing in poorly managed crypto projects. How can archives on the chain rise up out of this situation?

Archives on the chain - Photo by Clint Adair on Unsplash

At KnowledgeArc we believe that both cryptocurrency and the blockchain, the technology behind cryptocurrency, can be used for many purposes in the real world, for instance, opening up scientific research and discovery. One area where blockchain can make a difference is open archives. The technology currently used to store, share and open up knowledge is not particularly robust, and most archives are easily manipulated, making them neither immutable nor permanent. They are also not as open as we would like them to be. By using new technology like blockchain to build our archive ecology, we believe we can make archiving more open, distributed and democratized.

Making Archives Open and Secure

First of all, we believe we can make archives truly open. Today we are dependent on Google, Amazon or other proprietary actors to store the large amounts of data that we want to archive. As long as we depend on these actors, our archives cannot be truly open, because our information lies in the hands of a commercial third party.

We believe that the solution to this problem is to store the information using p2p (peer-to-peer) file-sharing software. This is distributed software, similar to BitTorrent, where files are spread across the network. This would make centralized third parties redundant, while our information remains safe and available. By doing it this way we return to the beginnings of the internet, where openness was the norm, only with new and powerful technology.

But this is not blockchain technology, so where does the blockchain fit into archives on the chain?

The blockchain is the next step in our archive solution, and the step that we believe will solve the safety issues we are facing in today’s archive solutions. By referencing information about stored assets and metadata in an Ethereum smart contract, that information stays safe, cannot be changed and remains permanent. This creates a new level of safety within our systems, providing true immutability and permanence, a major contrast to the unsafe archives we have today.

Examples of information that could be stored on chain include Handle, DOI or other item identification information, unique author IDs and other data which is important to keep safe and immutable.

Saving Money?

Blockchain, cryptocurrencies and p2p technologies democratize the archiving space. Anyone will be able to run a low-cost archiving node on a desktop computer, laptop or even, possibly, their mobile phone. Archives on the chain will be able to “push” information to other archives, consolidating disparate data in easy-to-digest, federated databases.

In-house servers or cloud infrastructures are no longer needed for simple archiving. Instead, funding can be focussed on building better user experiences, making information easier to find, consume and share.

Decentralized marketplaces will provide competitively priced backup solutions, peer review, journaling and other archive-focussed solutions.

As centralized services become redundant, more democratized solutions will drive prices down. Many of today’s archiving requirements will become automated or will only require a one-off payment to store something permanently.

 Read more about the technical work we are doing on building an archive on the chain.

Taking back ownership of your data

The convenience of hosted solutions for digital assets and archiving can hide a major problem: do you control the data you own? KnowledgeArc.Network’s decentralized architecture ensures you are in full control of your data.

Do you really own your data?

Hosting digital assets in the cloud has become a popular and cost-effective solution. But what happens when you decide the host you are with is no longer providing the level of service you expect?

You may think migration is as simple as your existing host dumping the data out to a backup file and making it available for your new provider to restore. Unfortunately, the reality isn’t that simple; closed source applications often have proprietary formats which make them difficult or even impossible to import into other systems.

On the other hand, some open source systems are customized, but the customizations might not be publicly available, so backups only capture a subset of your data. For example, there are archive hosting providers who have built multi-tenant data storage on top of a single application. Databases in such a system cannot simply be lifted and re-implemented on other infrastructure. This results in broken features and crucial data being excluded from the system.

Even if migrating from one system to another runs smoothly, complex backups and time-consuming debugging are often required. Export/import tools need constant maintenance, but with niche products such as digital asset systems, maintenance of these ancillary tools can often be ignored.

A distributed solution

The KnowledgeArc.Network platform makes centralized storage obsolete. Data is replicated in multiple locations whilst still being owned by the original creator.

Replication allows application managers, developers and system administrators to build a variety of user experiences on top of the data. There is no need to set up complex data structures, import and export data, or work around missing data. Instead, the user simply replicates an existing database and works directly on top of it.

Data can also remain private even though it is stored in a public way. By encrypting data, the owner is the only one with access to this information and can grant other users varying degrees of control. For example, perhaps certain users might only be able to read data. Others might be able to update existing data but not delete it.

Centralized vs decentralized

Recently there has been a move to more centralized archiving solutions. Instead of disparate systems talking to one another or federated systems being established to support a “go-to” repository of information, a number of governments and bureaucracies are pushing for everything to be centralized. This results in a stagnation of innovation and, more importantly, a single point of failure.

Figure 1: Legacy Archives

KnowledgeArc.Network decentralized databases will capture the best of both worlds; every archive is unique but their records can easily be merged into a single, federated archive. This federated archive can then be replicated further so that multiple user interfaces can be created on top of the same data.

KnowledgeArc.Network captures the best of every model. Decentralized, independent databases provide institutions with full control and ownership of their data. Federated archives simply merge distributed databases into a single data store. And, finally, the entire community can build their own user experiences on top of any archived data by simply replicating an existing database.

Figure 2: Decentralized Archive

The Decentralized Archive Journey Begins

At KnowledgeArc.Network, we believe that the publishing, dissemination and archiving of information needs to fundamentally change.

Information should be open and public. It should also incentivize a decentralized community to participate in the creation, review, licensing, verification and archiving of information.

A democratized ecosystem for peer review

A single entity should not control and decide what quality content can or cannot be peer reviewed and published. Large, well-funded institutions should not receive preferential treatment over smaller, less-funded ones. Instead, we believe the entire community can actively participate in the review and publishing process. The community can decide inclusion of a work based on its merits rather than the size of an institution’s reach and influence.

Your data held for ransom

The convenience of a third-party hosting provider can often mean you give up control of your data. If you decide to change hosts or move information to in-house infrastructure, you are reliant on your existing host to hand over all your data. Depending on your agreement with your host, it may not be possible to salvage it all.

KnowledgeArc.Network uses decentralized technologies to store, sign and verify your archived information. An archiving provider can no longer hold your data exclusively; you and others can replicate your data, even if it is private, whether it is to another hosting provider, an in-house server or even your local computer.

Multiple versions of the data also ensure there isn’t a single point of failure.

Incentivizing the community

Current solutions incentivize and reward middlemen, but it is the authors, reviewers, end-users and developers who create all of the information from which these middlemen profit.

KnowledgeArc.Network aims to incentivize the community, and revenue will go directly to the participants of the ecosystem. Citations and licensing will flow directly to the creators of works archived to the ecosystem through the use of automated agreements (smart contracts). Community members will conduct peer review, with smart contracts providing remuneration directly. Developers will have access to the entire system and will be able to create tools and processes which directly benefit all users. And users will be able to directly reward content creators for their contribution to the ecosystem.

Alternative statistics and metrics could even result in additional earnings for content creators as impact factor is monetized.

KnowledgeArc.Network whitepaper

We distilled our vision into our official whitepaper which is available for download.

Active development

The whitepaper is not the start of our development cycle. KnowledgeArc.Network has been in development for 2 years and momentum is growing.

We are integrating various technologies with our archiving platform and ecosystem and cultivating partnerships with other blockchain systems which we have identified as key to the evolution of the KnowledgeArc.Network ecosystem.

Tokenomics

The utility token, Archive (ARCH), powers transactions within the decentralized KnowledgeArc.Network ecosystem.

Community members participating in the ecosystem will be able to directly earn tokens; authors will earn through citations and licensing, peer reviewers through verifying the authenticity of works, developers by extending functionality and providing customizations, and resource providers by providing solutions such as backups and application hosting.

We are working on ways to make using Archive as easy as possible and are incentivizing key archiving players to embrace KnowledgeArc.Network and blockchain technologies to replace redundant solutions and methodologies.

Self-signed certificates with local root CA

This tutorial briefly covers creating and trusting your own certificate authority (CA) for issuing self-signed SSL certificates, and is designed to work with OrbitDB’s new REST API HTTP/2 push services.

This tutorial is aimed at Unix-based systems, in particular Ubuntu and other Debian-based Linux distributions, so you may need to modify the commands for your own platform. All code examples are intended to be copied and pasted directly into the command line and will generate certificates in your current working directory.

To get started, we are going to create a root certificate which we will use to sign additional SSL certificates.

First, create your root CA private key:

openssl genrsa -des3 -out rootSSL.key 2048
Generating RSA private key, 2048 bit long modulus
.........................+++
...........................................................................+++
e is 65537 (0x010001)
Enter pass phrase for rootSSL.key:

You will be prompted for a password. Be sure to specify one that is long enough as you may encounter errors if your password is too short.

Next, use your CA private key to create a root certificate:

openssl req -x509 -new -nodes -key rootSSL.key -sha256 -days 1024 -out rootSSL.pem

Once launched, you will need to re-enter the password you assigned to your private key:

Enter pass phrase for rootSSL.key:

If successful, you will then be prompted to provide information about your certificate:

 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 Country Name (2 letter code) [AU]:
 State or Province Name (full name) [Some-State]:WA
 Locality Name (eg, city) []:
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:
 Organizational Unit Name (eg, section) []:
 Common Name (e.g. server FQDN or YOUR name) []:localhost
 Email Address []:

You are now ready to install the new CA certificate into your CA trust store. The following commands will copy the root certificate into Ubuntu’s CA store so you may need to modify the paths if you are on a different distribution or OS platform:

sudo mkdir /usr/local/share/ca-certificates/extra
sudo cp rootSSL.pem /usr/local/share/ca-certificates/extra/rootSSL.crt
sudo update-ca-certificates

Now it is time to generate a certificate for your development environment. Create a private key and certificate signing request (CSR) for your new certificate:

openssl req \
 -new -sha256 -nodes \
 -out localhost.csr \
 -newkey rsa:2048 -keyout localhost.key \
 -subj "/C=AU/ST=WA/L=City/O=Organization/OU=OrganizationUnit/CN=localhost/emailAddress=demo@example.com"

Next, create the certificate, signing it with your Root CA:

openssl x509 \
 -req \
 -in localhost.csr \
 -CA rootSSL.pem -CAkey rootSSL.key -CAcreateserial \
 -out localhost.crt \
 -days 500 \
 -sha256 \
 -extfile <(printf "authorityKeyIdentifier=keyid,issuer\nbasicConstraints=CA:FALSE\nkeyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment\nsubjectAltName=DNS:localhost\n")

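As an optional sanity check (a small addition to the original steps, assuming you are still in the directory containing rootSSL.pem), you can ask openssl to confirm that the new certificate chains back to your root CA:

openssl verify -CAfile rootSSL.pem localhost.crt

If everything is in order, openssl should report localhost.crt: OK.
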
Your SSL certificate is now ready for use. To use it with OrbitDB’s REST API, launch the cli.js script with the flags --https-key and --https-cert, using the new localhost.key and localhost.crt files we just created:

node src/cli.js api --ipfs-host localhost --orbitdb-dir ./orbitdb --https-cert ./localhost.crt --https-key ./localhost.key

The certificates should validate against your Root CA when used with tools such as curl:

curl -vs --http2 -X POST https://localhost:3000/db/my-feed --data 'create=true' --data 'type=feed'
 successfully set certificate verify locations:
 CAfile: /etc/ssl/certs/ca-certificates.crt
 CApath: /etc/ssl/certs
 ...
 SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
 ALPN, server accepted to use h2
 Server certificate:
 subject: C=AU; ST=WA; L=Ellenbrook; O=Organization; OU=OrganizationUnit; CN=localhost; emailAddress=demo@example.com
 start date: May 25 14:56:35 2019 GMT
 expire date: Oct  6 14:56:35 2020 GMT
 common name: localhost (matched)
 issuer: C=AU; ST=WA; L=Ellenbrook; O=Internet Widgits Pty Ltd; CN=Local Certificate
 SSL certificate verify ok. 

In the above, you can see the CA being loaded from the correct location (/etc/ssl/certs) and details about the certificate (Server certificate:).

You can now successfully run the new OrbitDB HTTP API with self-signed certificates on your development environment.

References

How to get HTTPS working in localhost development environment, secureend.com, https://reactpaths.com/how-to-get-https-working-in-localhost-development-environment-f17de34af046

OrbitDB HTTP API – A Practical Guide

OrbitDB is the distributed, p2p database system which will revolutionize the way we store, replicate, and disseminate information and will become the cornerstone of any dApp which requires data storage.

Too much centralization has put control of the internet into the hands of a few. Web3 aims to decentralize the internet, providing a more democratized, distributed ecosystem.

Most of the hype is around cryptocurrencies and the freedoms they will bring, cutting out the middleman and putting control back in the hands of the people. However, there are a number of less “high-profile” but equally game changing projects which will reshape the internet as we know it.

This how-to briefly covers the basics of how to create an OrbitDB database and store data, as well as introducing the most powerful feature of OrbitDB: replicating the data across multiple locations.

OrbitDB, the decentralized database

OrbitDB is a decentralized database system which uses IPFS for distributing stored data via P2P technologies. Storing data in OrbitDB ensures high availability and low latency due to the nature of distributed architectures such as IPFS.

Originally, OrbitDB was available as a Node.js library, so usage was limited to Node-based applications. However, with the recent release of the OrbitDB REST API, any language which supports REST calls can leverage this distributed database.

Setting up

Running an OrbitDB REST server is relatively straightforward, but some knowledge of working on the command line will be required. These steps assume you are running Linux or some other Unix-based operating system. Windows users will need to translate the commands for their environment.

Prerequisites

Firstly, this guide assumes you can use a command line and install software. You don’t need to know Node.js or how peer-to-peer systems work, but you will need to be able to execute commands in a terminal. In this guide, all commands will be run from the terminal and will be represented like so:

type commands at the command line

You will also need two machines running, since we will be replicating a decentralized database. This can either be two physical computers, a couple of virtual machines or Docker containers.

Lastly, because the OrbitDB server uses Node.js, you will also need npm (bundled with Node.js) to install the dependencies. This tutorial will not cover the installation and configuration of these requirements.
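
If you are unsure whether Node.js and npm are already available, a quick check from the terminal is:

node --version
npm --version

Both commands should print a version number; if either fails, install Node.js before continuing.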

Running IPFS

OrbitDB uses IPFS to distribute and replicate data stores. The OrbitDB HTTP server runs in one of two modes: local or api.

When run in local mode, OrbitDB will run its own IPFS node. When run in api mode, OrbitDB will connect to an already-running IPFS node.

For this tutorial we will connect to a running IPFS daemon and will assume you already have this installed. You will also want to run the IPFS daemon with pubsub enabled.

Start your first IPFS daemon by running:

ipfs daemon --enable-pubsub-experiment
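
To confirm the daemon is up and connected to the network, you can list its current peers from a second terminal (this assumes the default IPFS configuration):

ipfs swarm peers

This should print a list of multiaddresses for the peers your node is connected to.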

Building the REST server

Now get a copy of the code. You can grab it via GitHub at https://github.com/orbitdb/orbit-db-http-api:

wget https://github.com/orbitdb/orbit-db-http-api/archive/master.zip

Alternatively, you can clone the git repo:

git clone https://github.com/orbitdb/orbit-db-http-api.git

Change into the project directory (orbit-db-http-api if you cloned the repository) and install the dependencies:

cd orbit-db-http-api
npm install

Setting up the SSL certificates

The latest version of the OrbitDB HTTP API incorporates HTTP/2. Therefore, to run the server, you will need to generate SSL certificates.

There are a couple of options available for obtaining certificates; you can issue a certificate using a certificate authority such as Let’s Encrypt, or, you can become your own CA. For development environments, the second option may be better and a thorough overview on how to do this is covered by the tutorial Self-signed certificates with local root CA.

The rest of this guide will assume you have a trusted SSL certificate set up and that curl will use your trust store to validate the certificate. If not, you will need to tell curl to ignore the certificate verification by passing the -k flag:

curl -k -X GET ...

Up and Running

Starting the HTTP API server

Start up the OrbitDB server and connect it to your running IPFS daemon:

node src/cli.js api --ipfs-host localhost --orbitdb-dir ./orbitdb --https-key localhost.key --https-cert localhost.crt

The --https-key and --https-cert options above assume you are using the certificate and key generated from the tutorial Self-signed certificates with local root CA. If not, replace them with your own certificate and key.

Consuming our first request

The REST server is now running. You can test this by running something simple (we are going to use cURL to run the rest of these commands, so make sure you have it installed):

curl -X GET https://localhost:3000/identity

This will return a JSON string representing your OrbitDB node’s identity information. This includes your public key (which we will use later).
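
If you have jq installed, you can extract the public key straight from the response. This is just a convenience sketch; it assumes the field is named publicKey, as used later in this guide:

curl -s https://localhost:3000/identity | jq -r .publicKey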

Create a database

Creating a data store is very easy with the REST API and you can launch a store based on any of the supported types. For example, you can create a feed data store by running:

curl -X POST https://localhost:3000/db/my-feed --data 'create=true' --data 'type=feed'

You can also use JSON to specify the initial data store features:

curl -X POST https://localhost:3000/db/my-feed -H "Content-Type: application/json" --data '{"create":"true","type":"feed"}'

Add some data

Let’s add some data to our feed:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "A beginner's guide to OrbitDB REST API"

And to view the data we have just added:

curl -X GET https://localhost:3000/db/my-feed/all

["A beginner's guide to OrbitDB REST API"]

Be aware that there are two different endpoints for sending data to the store, and which endpoint you use will depend on the store’s type. For example, you will need to call /put when adding data to a docstore.
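
As a hypothetical sketch of that flow (the exact payload layout is an assumption and not taken from the API documentation; OrbitDB docstores index documents on an _id field by default), creating a docstore and putting a document into it might look something like this:

curl -X POST https://localhost:3000/db/my-docs --data 'create=true' --data 'type=docstore'
curl -X POST https://localhost:3000/db/my-docs/put -H "Content-Type: application/json" --data '{"_id":"article1","title":"OrbitDB docstore example"}'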

Replicating

Replicating is where the real power of distribution lies with OrbitDB. Replication is as simple as running an OrbitDB REST node on another machine.

Assuming you have a second computer which is accessible over your intranet or via Docker or a virtual machine, you can replicate the my-feed feed data store.

Getting ready to replicate

Before you replicate your feed data store, you will need to make a note of its address. You can do this by querying the data store’s details:

curl https://localhost:3000/db/my-feed

{"address":{"root":"zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw","path":"my-feed"},"dbname":"my-feed","id":"/orbitdb/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw/my-feed","options":{"create":"true","localOnly":false,"maxHistory":-1,"overwrite":true,"replicate":true},"type":"feed","capabilities":["add","get","iterator","remove"]}

Copy the id. We’re going to use it in the next step.
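
If you are scripting this, jq can pull the id out for you (assuming the response shape shown above):

curl -s https://localhost:3000/db/my-feed | jq -r .id

Remember that the /orbitdb/ prefix needs to be dropped and the remaining / URL-encoded before the address can be used in a request URL, as described below.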

Running another copy of the data store

On your second machine, make sure you have IPFS running and the OrbitDB REST server installed and running.

Replicating the my-feed data simply requires you to query the first machine’s my-feed data store using the full address. Using the address of the my-feed data store we queried earlier, request the data:

curl https://localhost:3000/db/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw%2Fmy-feed/all

["A beginner's guide to OrbitDB REST API"]

You may need to run the curl call a couple of times; OrbitDB will take a small amount of time to replicate the data over.

There are two important things to note about the address: 1) we drop the /orbitdb/ prefix, and 2) we need to URL-encode the /. The URL-encoded value of / is %2F.
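
If you prefer not to encode the address by hand, jq’s @uri filter can percent-encode it for you. This is just a convenience sketch (it assumes jq is installed) using the address from the example above:

ADDRESS="zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw/my-feed"
curl https://localhost:3000/db/$(jq -rn --arg a "$ADDRESS" '$a|@uri')/all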

And that’s it. You have successfully created a new OrbitDB data store on one machine and replicated it to another.

Let’s test it out. Back on your first machine, add another entry to the feed data store:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Learning about IPFS"

On your second machine, retrieve the feed list again:

curl https://localhost:3000/db/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw%2Fmy-feed/all

["A beginner's guide to OrbitDB REST API","Learning about IPFS"]

Adding data in a decentralized environment

What happens if you want to add more entries to the my-feed data store from your second machine?

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Adding an item from the second OrbitDB REST peer."
{"statusCode":500,"error":"Internal Server Error","message":"Error: Could not append entry, key \"03cc598325319e6c07594b50880747604d17e2be36ba8774cd2ccce44e125da236\" is not allowed to write to the log"}

If you check the output from your REST server you will see a permissions error. By default, any replicating node will not be able to write back to the data store. Instead, we have to tell the originating OrbitDB instance that the second instance can also write to the my-feed data store. To do this, we must manually add the public key of the second OrbitDB instance to the first instance.

It is important to note that the data store must be created with an access controller pre-specified. Start by deleting the data store on the first machine:

curl -X DELETE https://localhost:3000/db/my-feed

We must now set up the my-feed database again, this time specifying who is allowed to write to it:

curl -X POST https://localhost:3000/db/my-feed -H "Content-Type: application/json" --data '{"create":"true","type":"feed","accessController":{"type": "orbitdb","write": ["048161d9685991dc87f3e049aa04b1da461fdc5f8a280ed6234fa41c0f9bc98a1ce91f07494584a45b97160ac818e100a6b27777e0b1b09e6ba4795dcc797a6d8b"]}}'

Note the accessController property; this specifies the controller type and the key which can write to the database. In this case it is the first machine’s public key, which can be retrieved by running:

curl https://localhost:3000/identity

On the second machine, retrieve the public key:

curl https://localhost:3000/identity

Grab the publicKey value. We will now enable write access to the my-feed database:

curl -X PUT https://localhost:3000/db/my-feed/access/write --data 'publicKey=04072d1bdd0e5e43d9e10619d997f6293f4759959e19effb958785b7f08413fb81501496a043385c245dedc952ee01c06bc9c654afe79b11dd5f130796baf8d2da'

Here, publicKey is the public key of the second machine. We must execute this request from the first machine because only the first machine currently has write permissions to the data store.

With the second machine’s publicKey added, we can go ahead and add a new my-feed entry from the second machine:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Adding an item from the second OrbitDB REST peer."
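
This time the request should succeed. You can confirm the entry was accepted by fetching the feed contents on the second machine:

curl https://localhost:3000/db/my-feed/all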

Conclusion

This brief introduction to the new OrbitDB HTTP API will hopefully provide some insight into how OrbitDB functions and highlight some of the benefits a distributed database system brings to the decentralized web.

We have only scratched the surface of what is possible with OrbitDB. You could go ahead and add other machines to my-feed’s write access controller or create different data stores for storing data in different formats. Also, the HTTP API is only in its infancy, and a number of new features are being actively developed.

This new chapter in OrbitDB’s brief history is going to bring a lot of new development, and providing access from other languages will expand its usability.