Identifying academic content using the blockchain

The KnowledgeArc.Network blockchain developers have been working hard over the summer. Here is an update on what they have been building, including identifying academic content on the blockchain.

Identifiers and Academic Content

One of the ways we believe the blockchain can add real value to the scientific process is through stable, permanent and open identifiers. Therefore, we are working on how to implement stable identifiers, such as author IDs and persistent identifiers for academic content.

Currently, if your third-party identifier provider stops working, introduces a bug or simply goes out of business, your identifiers can be lost forever. Moving identifiers to the blockchain ensures true permanence and full ownership by you.
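One way to make an identifier independent of any single provider is to derive it from the content itself, much as IPFS derives CIDs. The sketch below illustrates the idea in Python; the metadata fields, the `arch:` prefix and the hashing scheme are illustrative assumptions, not KnowledgeArc.Network's actual identifier format.

```python
import hashlib
import json

def content_id(metadata: dict) -> str:
    """Derive a stable identifier from the record itself.

    Canonical JSON (sorted keys, no whitespace) ensures the same
    metadata always hashes to the same ID, no matter which node
    computes it -- so no central registry can lose or revoke it.
    """
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return f"arch:{digest[:32]}"  # "arch:" prefix is illustrative

paper = {"title": "On Decentralized Archives", "author": "A. Researcher", "year": 2019}
pid = content_id(paper)
assert pid == content_id(dict(paper))  # deterministic across nodes
```

Because the identifier is a pure function of the record, any party holding the metadata can recompute and verify it even if every hosting provider disappears.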

Read more about our blockchain development in our blog.

You can also check out our whitepaper which we launched in June.

June 2019 in review

June was an important month in the evolution of KnowledgeArc.Network. We review some of the highlights from the month.

Whitepaper

We released our whitepaper early in June. This was an important step; even though we had been developing features and software for over two years, the whitepaper captured the reason behind KnowledgeArc.Network and distilled what our ecosystem is all about at a higher level.

Deploying our whitepaper to IPFS also highlighted our commitment to distributed technologies.

Exchange Listings

We’re committed to decentralization, distribution and democracy. Therefore, we are excited to see our cryptocurrency, Archive (ARCH), listed on two decentralized exchanges: SwitchDex and Ethermium.

We hope this will make it easier for our community to obtain Archive for ongoing development in the KnowledgeArc.Network ecosystem.

OrbitDB

It’s important for decentralized applications to move forward and to be actively developed and supported. However, because dApps and other distributed applications are nascent technologies, not all of the underlying architecture is ready for production. As is often the case, the software is still under active development and requires substantial resources to reach a stable, production-ready state. This can make projects look stagnant even though developers are hard at work on various related projects.

KnowledgeArc.Network uses IPFS as its underlying storage mechanism. This includes OrbitDB, a decentralized, peer-to-peer database system which uses IPFS for replication. OrbitDB is a powerful technology and could become one of the cornerstones of Web3, much as MySQL was a cornerstone of the early web.

OrbitDB will be KnowledgeArc.Network’s decentralized storage layer, storing metadata and other supporting information. The ecosystem will be able to replicate these OrbitDB data stores as well as combine them to form larger databases.
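The "replicate and combine" step can be sketched conceptually. Real OrbitDB merges append-only operation logs using CRDTs over IPFS; the Python toy below substitutes a much simpler last-writer-wins merge over plain dictionaries, and the store layout is an assumption made for illustration only.

```python
def merge_stores(*stores):
    """Combine replicated key-value stores into one larger database.

    Each store maps record-id -> (logical_timestamp, value).
    Conflicts resolve to the latest write (last-writer-wins),
    a simplification of OrbitDB's CRDT-based log merge.
    """
    merged = {}
    for store in stores:
        for key, (ts, value) in store.items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
    return merged

# Two institutional archives replicate independently...
archive_a = {"rec1": (1, {"title": "Thesis A"})}
archive_b = {"rec1": (2, {"title": "Thesis A (corrected)"}),
             "rec2": (1, {"title": "Dataset B"})}

# ...and can be combined into a single federated view.
federated = merge_stores(archive_a, archive_b)
```

The important property, which the real CRDT machinery guarantees, is that the merge is deterministic: every node combining the same stores arrives at the same federated database.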

OrbitDB is under active development. That is why we have contributed time and resources to assist with the success of this project. Some of our work includes co-contributing to the HTTP API and field manual as well as maintaining the Go implementation of OrbitDB.

The KnowledgeArc.Network Working Group

We have started a working group, a place for advisors and experts to discuss ways to decentralize archiving, peer review and journalling.

During June, we invited some project managers and librarians who work in the archiving space to join our working group and we welcome these new members. We hope to expand this group of experts and look forward to seeing what insights they can provide to this new ecosystem.

Taking back ownership of your data

The convenience of hosted solutions for digital assets and archiving can hide a major problem: do you control the data you own? KnowledgeArc.Network’s decentralized architecture ensures you are in full control of your data.

Do you really own your data?

Hosting digital assets in the cloud has become a popular and cost-effective solution. But what happens when you decide the host you are with is no longer providing the level of service you expect?

You may think migration is as simple as your existing host dumping the data out to a backup file and making it available for your new provider to restore. Unfortunately, the reality isn’t that simple; closed source applications often have proprietary formats which make them difficult or even impossible to import into other systems.

On the other hand, some open source systems are customized, but the customizations might not be publicly available, so backups only capture a subset of your data. For example, there are archive hosting providers who have built multi-tenant data storage on top of a single application. Databases in such a system cannot simply be lifted and re-implemented on other infrastructure. This results in broken features and crucial data being excluded from the system.

Even when a migration from one system to another runs smoothly, it often requires complex backups and time-consuming debugging. Export/import tools need constant maintenance, but with niche products such as digital asset systems, maintenance of these ancillary tools is often neglected.

A distributed solution

The KnowledgeArc.Network platform makes centralized storage obsolete. Data is replicated in multiple locations whilst still being owned by the original creator.

Replication allows application managers, developers and system administrators to build a variety of user experiences on top of the data. There is no need to set up complex data structures, import and export data, or work around missing data. Instead, the user simply replicates an existing database and works directly on top of it.

Data can also remain private even though it is stored in a public way. By encrypting data, the owner is the only one with access to this information and can grant other users varying degrees of control. For example, perhaps certain users might only be able to read data. Others might be able to update existing data but not delete it.
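The graded permissions described above can be sketched as a small access-control layer. This is an illustrative Python toy: KnowledgeArc.Network's actual scheme (presumably built on encryption keys, as described above) is not specified here, and the class and method names are invented for the example.

```python
class PermissionedStore:
    """Toy data store with per-user read / update / delete grants."""

    READ, UPDATE, DELETE = "read", "update", "delete"

    def __init__(self, owner):
        self.owner = owner
        self.data = {}
        # The owner starts with every permission.
        self.grants = {owner: {self.READ, self.UPDATE, self.DELETE}}

    def grant(self, actor, user, *perms):
        if actor != self.owner:
            raise PermissionError("only the owner can grant access")
        self.grants.setdefault(user, set()).update(perms)

    def _check(self, user, perm):
        if perm not in self.grants.get(user, set()):
            raise PermissionError(f"{user} lacks {perm} permission")

    def read(self, user, key):
        self._check(user, self.READ)
        return self.data[key]

    def update(self, user, key, value):
        self._check(user, self.UPDATE)
        self.data[key] = value

    def delete(self, user, key):
        self._check(user, self.DELETE)
        del self.data[key]

store = PermissionedStore(owner="alice")
store.update("alice", "rec1", "metadata v1")

# Bob may read and update, but not delete.
store.grant("alice", "bob", PermissionedStore.READ, PermissionedStore.UPDATE)
store.update("bob", "rec1", "metadata v2")   # allowed
# store.delete("bob", "rec1") would raise PermissionError
```

In a decentralized setting the same effect is achieved cryptographically rather than by a trusted gatekeeper, for example by handing out read keys to some users and write keys to others.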

Centralized vs decentralized

Recently there has been a move to more centralized archiving solutions. Instead of disparate systems talking to one another or federated systems being established to support a “go-to” repository of information, a number of governments and bureaucracies are pushing for everything to be centralized. This results in a stagnation of innovation and, more importantly, a single point of failure.

Figure 1: Legacy Archives

KnowledgeArc.Network decentralized databases will capture the best of both worlds; every archive is unique, but its records can easily be merged into a single, federated archive. This federated archive can then be replicated further so that multiple user interfaces can be created on top of the same data.

KnowledgeArc.Network captures the best of every model. Decentralized, independent databases provide institutions with full control and ownership of their data. Federated archives simply merge distributed databases into a single data store. And, finally, the entire community can build their own user experiences on top of any archived data by simply replicating an existing database.

Figure 2: Decentralized Archive

Decentralizing DSpace Backups using IPFS

With the Archive token (ARCH) deployed and https://knowledgearc.io live, the team is now focussed on implementing our decentralized archiving ecosystem.

In the beginning…

KnowledgeArc.Network is already under development. We have spent the past year tokenizing various academic materials. Using a combination of ERC721 non-fungible tokens, IPFS and OrbitDB, we have been developing an open-source, distributed archiving ecosystem that can be implemented by anyone. Using our Archive token, community members will be incentivized to participate in the creation of a truly open, decentralized academic platform.
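The tokenization pipeline pairs an ERC721 token on-chain with content stored on IPFS. The real on-chain component is a Solidity contract; the Python sketch below only models the bookkeeping, and the registry class, the stand-in CID function and all field names are assumptions made for illustration.

```python
import hashlib

def fake_cid(content: bytes) -> str:
    """Stand-in for an IPFS CID: a hash that addresses the content."""
    return "Qm" + hashlib.sha256(content).hexdigest()[:32]

class TokenRegistry:
    """Toy ERC721-style registry: each token id maps to one owner
    and one content identifier (the tokenized academic item)."""

    def __init__(self):
        self.next_id = 1
        self.owner_of = {}   # token id -> current owner
        self.cid_of = {}     # token id -> content address on IPFS

    def mint(self, owner: str, content: bytes) -> int:
        token_id = self.next_id
        self.next_id += 1
        self.owner_of[token_id] = owner
        self.cid_of[token_id] = fake_cid(content)  # bytes live on IPFS
        return token_id

    def transfer(self, sender: str, to: str, token_id: int):
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owner_of[token_id] = to

registry = TokenRegistry()
tid = registry.mint("university-repo", b"PDF bytes of a thesis")
```

The design point this illustrates is the split of responsibilities: the chain records ownership and a content address, while the bulky content itself is stored and replicated on IPFS.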
