The Decentralized Archive Journey Begins

At KnowledgeArc.Network, we believe that the publishing, dissemination and archiving of information needs to fundamentally change.

Information should be open and public. It should also incentivize a decentralized community to participate in the creation, review, licensing, verification and archiving of information.

A democratized ecosystem for peer review

A single entity should not control or decide which content can or cannot be peer reviewed and published. Large, well-funded institutions should not receive preferential treatment over smaller, less well-funded ones. Instead, we believe the entire community can actively participate in the review and publishing process, deciding on the inclusion of a work based on its merits rather than on the size of an institution's reach and influence.

Your data held for ransom

The convenience of a third-party hosting provider can often mean you give up control of your data. If you decide to change hosts or move information to in-house infrastructure, you are reliant on your existing host to hand over all your data. Depending on your agreement with your host, it may not be possible to salvage it all.

KnowledgeArc.Network uses decentralized technologies to store, sign and verify your archived information. An archiving provider can no longer hold your data exclusively; you and others can replicate your data, even if it is private, whether it is to another hosting provider, an in-house server or even your local computer.

Multiple versions of the data also ensure there isn't a single point of failure.

Incentivizing the community

Current solutions incentivize and reward middlemen, but it is the authors, reviewers, end-users and developers who create all of the information from which these middlemen profit.

KnowledgeArc.Network aims to incentivize the community, and revenue will go directly to the participants of the ecosystem. Citations and licensing fees will flow directly to the creators of works archived in the ecosystem through the use of automated agreements (smart contracts). Community members will conduct peer review, with smart contracts providing remuneration directly. Developers will have access to the entire system and will be able to create tools and processes which directly benefit all users. And users will be able to directly reward content creators for their contributions to the ecosystem.

Alternative statistics and metrics could even result in additional earnings for content creators as impact factor is monetized.

KnowledgeArc.Network whitepaper

We distilled our vision into our official whitepaper which is available for download.

Active development

The whitepaper is not the start of our development cycle. KnowledgeArc.Network has been in development for 2 years and momentum is growing.

We are integrating various technologies with our archiving platform and ecosystem and cultivating partnerships with other blockchain systems which we have identified as key to the evolution of the KnowledgeArc.Network ecosystem.

Tokenomics

The utility token, Archive (ARCH), powers transactions within the KnowledgeArc.Network decentralized ecosystem.

Community members participating in the ecosystem will be able to earn tokens directly: authors through citations and licensing, peer reviewers through verifying the authenticity of works, developers by extending functionality and providing customizations, and resource providers by offering solutions such as backups and application hosting.

We are working on ways to make using Archive as easy as possible and are incentivizing key archiving players to embrace KnowledgeArc.Network and blockchain technologies to replace redundant solutions and methodologies.

Self-signed certificates with local root CA

This tutorial briefly covers creating and trusting your own certificate authority (CA) for issuing self-signed SSL certificates, and is designed to work with OrbitDB's new REST API HTTP/2 push services.

This tutorial is aimed at Unix-based systems, in particular Ubuntu and other Debian-based Linux distributions, so you may need to modify the commands for your own platform. All code examples are intended to be copied and pasted directly into the command line and will generate certificates in your current working directory.

To get started, we are going to create a root certificate which we will use to sign additional SSL certificates.

First, create your root CA private key:

openssl genrsa -des3 -out rootSSL.key 2048
Generating RSA private key, 2048 bit long modulus
.........................+++
..................................................+++
e is 65537 (0x010001)
Enter pass phrase for rootSSL.key:

You will be prompted for a password. Be sure to specify one that is long enough, as you may encounter errors if your password is too short.

Next, use your CA private key to create a root certificate:

openssl req -x509 -new -nodes -key rootSSL.key -sha256 -days 1024 -out rootSSL.pem

Once launched, you will need to re-enter the password you assigned to your private key:

Enter pass phrase for rootSSL.key:

If the password is accepted, you will be prompted to provide information about your certificate:

 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 Country Name (2 letter code) [AU]:
 State or Province Name (full name) [Some-State]:WA
 Locality Name (eg, city) []:
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:
 Organizational Unit Name (eg, section) []:
 Common Name (e.g. server FQDN or YOUR name) []:localhost
 Email Address []:

You are now ready to install the new CA certificate into your CA trust store. The following commands will copy the root certificate into Ubuntu’s CA store so you may need to modify the paths if you are on a different distribution or OS platform:

sudo mkdir /usr/local/share/ca-certificates/extra
sudo cp rootSSL.pem /usr/local/share/ca-certificates/extra/rootSSL.crt
sudo update-ca-certificates
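
If the root certificate was picked up, update-ca-certificates should report output similar to:

Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.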

Now it is time to generate a certificate for your development environment. Create a certificate signing request (CSR) and a private key for your new certificate:

openssl req \
 -new -sha256 -nodes \
 -out localhost.csr \
 -newkey rsa:2048 -keyout localhost.key \
 -subj "/C=AU/ST=WA/L=City/O=Organization/OU=OrganizationUnit/CN=localhost/emailAddress=demo@example.com"

Next, create the certificate, signing it with your Root CA:

openssl x509 \
 -req \
 -in localhost.csr \
 -CA rootSSL.pem -CAkey rootSSL.key -CAcreateserial \
 -out localhost.crt \
 -days 500 \
 -sha256 \
 -extfile <(printf "authorityKeyIdentifier=keyid,issuer\nbasicConstraints=CA:FALSE\nkeyUsage=digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment\nsubjectAltName=DNS:localhost\n")

Note that printf is used for the -extfile input so that each extension lands on its own line; a plain echo would write literal \n sequences, which OpenSSL cannot parse.

Your SSL certificate is now ready for use. To use it with OrbitDB's REST API, launch the cli.js script with the flags --https-key and --https-cert, using the new localhost.key and localhost.crt files we just created:

node src/cli.js api --ipfs-host localhost --orbitdb-dir ./orbitdb --https-cert ./localhost.crt --https-key ./localhost.key

The certificates should validate against your Root CA when used with tools such as curl:

curl -vs --http2 -X POST https://localhost:3000/db/my-feed --data 'create=true' --data 'type=feed'
 successfully set certificate verify locations:
 CAfile: /etc/ssl/certs/ca-certificates.crt
 CApath: /etc/ssl/certs
 ...
 SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
 ALPN, server accepted to use h2
 Server certificate:
 subject: C=AU; ST=WA; L=Ellenbrook; O=Organization; OU=OrganizationUnit; CN=localhost; emailAddress=demo@example.com
 start date: May 25 14:56:35 2019 GMT
 expire date: Oct  6 14:56:35 2020 GMT
 common name: localhost (matched)
 issuer: C=AU; ST=WA; L=Ellenbrook; O=Internet Widgits Pty Ltd; CN=Local Certificate
 SSL certificate verify ok. 

In the above, you can see the CA being loaded from the correct location (/etc/ssl/certs) and details about the certificate (Server certificate:).

You can now successfully run the new OrbitDB HTTP API with self-signed certificates on your development environment.

References

How to get HTTPS working in localhost development environment, reactpaths.com, https://reactpaths.com/how-to-get-https-working-in-localhost-development-environment-f17de34af046

OrbitDB HTTP API – A Practical Guide

OrbitDB is a distributed, peer-to-peer database system which will revolutionize the way we store, replicate and disseminate information, and which will become a cornerstone of any dApp requiring data storage.

Too much centralization has put control of the internet into the hands of a few. Web3 aims to decentralize the internet, providing a more democratized, distributed ecosystem.

Most of the hype is around cryptocurrencies and the freedoms they will bring, cutting out the middleman and putting control back in the hands of the people. However, there are a number of less “high-profile” but equally game changing projects which will reshape the internet as we know it.

This how-to briefly covers the basics of creating an OrbitDB database and storing data, and introduces the most powerful feature of OrbitDB: replicating data across multiple locations.

OrbitDB, the decentralized database

OrbitDB is a decentralized database system which uses IPFS for distributing stored data via P2P technologies. Storing data in OrbitDB ensures high availability and low latency due to the nature of distributed architectures such as IPFS.

Originally, OrbitDB was available as a Node.js library, so usage was limited to Node-based applications. However, with the recent release of the OrbitDB REST API, any language which supports REST calls can leverage this distributed database.

Setting up

Running an OrbitDB REST server is relatively straightforward, but some knowledge of working on the command line will be required. These steps assume you are running Linux or some other Unix-based operating system. Windows users will need to translate the commands to their environment.

Prerequisites

Firstly, this guide assumes you can use a command line and install software. You don't need to know Node.js or how peer-to-peer systems work, but you will need to be able to execute commands in a terminal. In this guide, all commands will be run from the terminal and will be represented like so:

type commands at the command line

You will also need two machines running, since we will be replicating a decentralized database. These can be two physical computers, a couple of virtual machines or Docker containers.

Lastly, because the OrbitDB server uses Node.js you will also need npm (bundled with Node.js) to install the dependencies. This tutorial will not cover the installation and configuration of these requirements.

Running IPFS

OrbitDB uses IPFS to distribute and replicate data stores. The OrbitDB HTTP server runs in one of two modes: local or api.

When run in local mode, OrbitDB will run its own IPFS node. When run in api mode, OrbitDB will connect to an already-running IPFS node.

For this tutorial we will connect to a running IPFS daemon and will assume you already have this installed. You will also want to run the IPFS daemon with pubsub enabled.

Start your first IPFS daemon by running:

ipfs daemon --enable-pubsub-experiment
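
From another terminal, you can confirm the daemon is up and accepting API requests by querying the node's identity:

ipfs id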

Building the REST server

Now get a copy of the code. You can grab an archive of the repository from GitHub at https://github.com/orbitdb/orbit-db-http-api:

wget https://github.com/orbitdb/orbit-db-http-api/archive/master.zip

Alternatively, you can clone the git repo:

git clone https://github.com/orbitdb/orbit-db-http-api.git

Change into the source directory and install the dependencies:

cd orbit-db-http-api
npm install

Setting up the SSL certificates

The latest version of the OrbitDB HTTP API incorporates HTTP/2. Therefore, to run the server, you will need to generate SSL certificates.

There are a couple of options available for obtaining certificates: you can issue a certificate using a certificate authority such as Let's Encrypt, or you can become your own CA. For development environments, the second option may be better, and a thorough overview of how to do this is covered by the tutorial Self-signed certificates with local root CA.

The rest of this guide will assume you have a trusted SSL certificate set up and that curl will use your trust store to validate the certificate. If not, you will need to tell curl to ignore the certificate verification by passing the -k flag:

curl -k -X GET ...

Up and Running

Starting the HTTP API server

Start up the OrbitDB server and connect to your running ipfs:

node src/cli.js api --ipfs-host localhost --orbitdb-dir ./orbitdb --https-key localhost.key --https-cert localhost.crt

The --https-key and --https-cert options above assume you are using the certificate and key generated in the tutorial Self-signed certificates with local root CA. If not, replace them with your own certificate and key.

Consuming our first request

The REST server is now running. You can test this by running something simple (we are going to use curl for the rest of these commands, so make sure you have it installed):

curl -X GET https://localhost:3000/identity

This will return a JSON string representing your OrbitDB node’s identity information. This includes your public key (which we will use later).
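
If you have jq installed, you can extract just the publicKey value, which we will need later when granting write access (this assumes the identity JSON exposes a publicKey field, as referenced later in this guide):

curl -s https://localhost:3000/identity | jq -r .publicKey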

Create a database

Creating a data store is very easy with the REST API and you can launch a store based on any of the supported types. For example, you can create a feed data store by running:

curl -X POST https://localhost:3000/db/my-feed --data 'create=true' --data 'type=feed'

You can also use JSON to specify the initial data store features:

curl -X POST https://localhost:3000/db/my-feed -H "Content-Type: application/json" --data '{"create":"true","type":"feed"}'

Add some data

Let’s add some data to our feed:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "A beginner's guide to OrbitDB REST API"

Now view the data we have just added:

curl -X GET https://localhost:3000/db/my-feed/all

["A beginner's guide to OrbitDB REST API"]

Be aware that there are two different endpoints for sending data to the store, and which endpoint you use will depend on the store's type. For example, you will need to call /put when adding data to a docstore, as sketched below.
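
As a minimal sketch of the docstore case (the store name my-docs and the document fields here are illustrative assumptions, not from this guide), creating a docstore and writing a document via /put might look like:

curl -X POST https://localhost:3000/db/my-docs -H "Content-Type: application/json" --data '{"create":"true","type":"docstore"}'
curl -X POST https://localhost:3000/db/my-docs/put -H "Content-Type: application/json" --data '{"_id":"doc1","title":"OrbitDB docstore example"}'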

Replicating

Replicating is where the real power of distribution lies with OrbitDB. Replication is as simple as running an OrbitDB REST node on another machine.

Assuming you have a second computer which is accessible over your intranet or via Docker or a virtual machine, you can replicate the my-feed feed data store.

Getting ready to replicate

Before you replicate your feed data store, you will need to make a note of its address. You can do this by querying the data store’s details:

curl https://localhost:3000/db/my-feed

{"address":{"root":"zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw","path":"my-feed"},"dbname":"my-feed","id":"/orbitdb/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw/my-feed","options":{"create":"true","localOnly":false,"maxHistory":-1,"overwrite":true,"replicate":true},"type":"feed","capabilities":["add","get","iterator","remove"]}

Copy the id. We’re going to use it in the next step.
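
If you have jq installed, you can capture the address directly into a shell variable instead of copying it by hand:

DB_ID=$(curl -s https://localhost:3000/db/my-feed | jq -r .id)
echo $DB_ID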

Running another copy of the data store

On your second machine, make sure you have IPFS running and the OrbitDB REST server installed and running.

Replicating the my-feed data simply requires you to query the first machine's my-feed data store using its full address. Using the address of the my-feed data store queried earlier, request the data:

curl https://localhost:3000/db/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw%2Fmy-feed/all

["A beginner's guide to OrbitDB REST API"]

You may need to run the curl call a couple of times; OrbitDB will take a small amount of time to replicate the data over.

There are two important things to note about the address: 1) we drop the /orbitdb/ prefix, and 2) we need to URL-encode the /. The URL-encoded value of / is %2F.
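
As a small shell sketch of that transformation, using the address from above:

ADDR="/orbitdb/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw/my-feed"
ENC=$(echo "$ADDR" | sed -e 's|^/orbitdb/||' -e 's|/|%2F|g')
curl https://localhost:3000/db/$ENC/all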

And that's it. You have successfully created a new OrbitDB data store on one machine and replicated it to another.

Let’s test it out. Back on your first machine, add another entry to the feed data store:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Learning about IPFS"

On your second machine, retrieve the feed list again:

curl https://localhost:3000/db/zdpuAzCDGmFKdZuwQzCZEgNGV9JT1kqt1NxCZtgMb4ZB4xijw%2Fmy-feed/all

["A beginner's guide to OrbitDB REST API","Learning about IPFS"]

Adding data in a decentralized environment

What happens if you want to add more entries to the my-feed data store from your second machine?

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Adding an item from the second OrbitDB REST peer."
{"statusCode":500,"error":"Internal Server Error","message":"Error: Could not append entry, key \"03cc598325319e6c07594b50880747604d17e2be36ba8774cd2ccce44e125da236\" is not allowed to write to the log"}

If you check the output from your REST server, you will see a permissions error. By default, a replicating node cannot write back to the data store. Instead, we have to tell the originating OrbitDB instance that the second instance can also write to the my-feed data store. To do this, we must manually add the public key of the second OrbitDB instance to the first instance.

It is important to note that the data store must be created with its access controller specified up front. Start by deleting the data store on the first machine:

curl -X DELETE https://localhost:3000/db/my-feed

We must now set up the my-feed database again, this time specifying an access controller:

curl -X POST https://localhost:3000/db/my-feed -H "Content-Type: application/json" --data '{"create":"true","type":"feed","accessController":{"type": "orbitdb","write": ["048161d9685991dc87f3e049aa04b1da461fdc5f8a280ed6234fa41c0f9bc98a1ce91f07494584a45b97160ac818e100a6b27777e0b1b09e6ba4795dcc797a6d8b"]}}'

Note the accessController property; this specifies the controller type and the key which can write to the database. In this case it is the first machine's public key, which can be retrieved by running:

curl https://localhost:3000/identity

On the second machine, retrieve the public key:

curl https://localhost:3000/identity

Grab the publicKey value. We will now enable write access to the my-feed database:

curl -X PUT https://localhost:3000/db/my-feed/access/write --data 'publicKey=04072d1bdd0e5e43d9e10619d997f6293f4759959e19effb958785b7f08413fb81501496a043385c245dedc952ee01c06bc9c654afe79b11dd5f130796baf8d2da'

Here, publicKey is the public key of the second machine. We must execute this request from the first machine, because only the first machine currently has write permissions to the data store.

With the second machine's publicKey added, we can go ahead and add a new entry to my-feed from the second machine:

curl -X POST https://localhost:3000/db/my-feed/add --data-urlencode "Adding an item from the second OrbitDB REST peer."
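
To confirm the write has replicated back, list the feed from the first machine again; after a short delay, the entry added from the second machine should appear:

curl https://localhost:3000/db/my-feed/all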

Conclusion

This brief introduction to the new OrbitDB HTTP API will hopefully provide some insight into how OrbitDB functions and highlight some of the benefits a distributed database system brings to the decentralized web.

We have only scratched the surface of what is possible with OrbitDB. You could go ahead and add other machines to my-feed's write access controller, or create different data stores for storing data in different formats. Also, the HTTP API is only in its infancy, and a number of new features are being actively developed.

This new chapter in OrbitDB's brief history is going to bring a lot of new development, and providing access from other languages will expand its usability.

Decentralizing Attribution Using Po.et

Successful management of copyright is integral to an academic repository. Attribution, citation and licensing all depend on clear terms of use as outlined in an archived item's metadata. Decentralizing attribution using Po.et and blockchain technologies is an effective method to achieve this.

Current implementations use centralized methods to store copyright and licensing terms, and these terms can be easily changed or manipulated at any time. What is needed is a trusted, immutable ledger of timestamped items, stored in a way that is accessible to all.


Decentralizing DSpace Backups using IPFS

With the Archive token (ARCH) deployed and https://knowledgearc.io live, the team is now focused on implementing our decentralized archiving ecosystem.

In the beginning…

KnowledgeArc.Network is already under development. We have spent the past year tokenizing various academic material. Using a combination of ERC721 non-fungible tokens, IPFS and OrbitDB, we have been developing an open-source, distributed archiving ecosystem that can be implemented by anyone. Using our Archive token, community members will be incentivized to participate in the creation of a truly open, decentralized academic platform.


Archive Token Launch

We have been quietly launching our new utility token, Archive (ARCH). Designed as a method of exchange across KnowledgeArc.Network, Archive will be used to power the archiving ecosystem.

Roadmap

We have been working on the implementation of our Archive token for almost 12 months now. We have identified various use-cases which we have grouped into short, medium and long term goals.
