# Import Database

### Nodes can revalidate an existing database by importing it and initializing with designated flags.

#### Blockchain State Verification and Data Indexing

Checking the blockchain state at a specific point in time, ensuring software compatibility, validating blockchain integrity, and indexing data from genesis to the present are critical tasks for maintaining a secure and efficient blockchain system. This guide covers how to:

* **Check Blockchain State at a Specific Time**: This involves using APIs or direct database access to examine the state of the blockchain at a given block number (e.g., block 255255). For example, one could import the database and configure the node to stop at the block of interest.
* **Ensure Backwards Compatibility with New Software Versions**: It's crucial to verify that updates or new versions of the blockchain software do not introduce compatibility issues with existing data and functionalities.
* **Validate Blockchain State**: Regularly validating the integrity and consistency of the blockchain state helps in identifying and rectifying discrepancies early.
* **Index Data in Elasticsearch**: For enhanced query capabilities and analytics, you can index all the blockchain data, from the genesis block to the latest, into a search engine like Elasticsearch.
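As a concrete illustration of the first point, the sketch below builds the REST query for a block at a given nonce. The `/block/by-nonce` route and the default port `8080` are assumptions carried over from MultiversX-derived node APIs; verify them against your node's API configuration before relying on them.

```shell
# Hypothetical sketch: build the REST URL for inspecting the block at a given nonce.
# The /block/by-nonce route and port 8080 are assumptions (MultiversX-style API);
# check your node's api.toml for the actual routes and port.
block_url() {
  local base="$1"   # node API base URL, e.g. http://localhost:8080
  local nonce="$2"  # block nonce (height) to inspect
  echo "${base}/block/by-nonce/${nonce}"
}

# Example: inspect the state around block 255255 on a local observer:
# curl -s "$(block_url http://localhost:8080 255255)"
```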

### How to start the process <a href="#how-to-start-the-process" id="how-to-start-the-process"></a>

Let's suppose we have the following data structure:

```
  ~/of-chain-go/cmd/node
```

The `node` binary is in the specified path. We also have a database built by an observer that synced continuously with the chain from genesis, without shard switching. This database will be placed in a directory; let's presume we place it next to the node's binary, yielding a data structure as follows:

```
.
├── config
│    ├── api.toml
│    ├── config.toml
│    ...
├── import-db
│    └── db
│        └── 1
│            ├── Epoch_0
│            │     └── Shard_1
│            │         ├── BlockHeaders
│            │         │   ...
│            │         ├── BootstrapData
│            │         │   ...
│            │         ...
│            └── Static
│                  └── Shard_1
│                      ...
├── node
```

Ensure the `db` directory is a subdirectory of `import-db`. Verify that the `config` directory, especially the `prefs.toml` file, matches the original node's configuration.
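A quick pre-flight check for this layout can look like the sketch below. It builds a throwaway copy of the expected structure and validates it; point `check_layout` at your real node directory instead. The paths mirror the tree shown above and are not mandated by the node itself.

```shell
# Sketch: verify that db/ sits under import-db/ and that config/ exists,
# mirroring the directory tree shown above. Adjust the path to your node dir.
check_layout() {
  local node_dir="$1"
  [ -d "$node_dir/import-db/db" ] || { echo "missing $node_dir/import-db/db"; return 1; }
  [ -d "$node_dir/config" ]       || { echo "missing $node_dir/config"; return 1; }
  echo "layout OK"
}

# Demo against a throwaway copy of the expected structure:
demo="$(mktemp -d)"
mkdir -p "$demo/import-db/db/1/Epoch_0/Shard_1" "$demo/config"
check_layout "$demo"
```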

{% hint style="danger" %}
WARNING

Before you begin the import-db process, ensure that the `/of-chain-go/cmd/node/db` directory is completely empty. This will allow the import to start from the genesis block and proceed up to the last available epoch.
{% endhint %}
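The warning above can be checked mechanically before launching. This is only a sketch: it reports whether the node's `db` directory has leftover contents, and deliberately does not delete anything for you.

```shell
# Sketch: report whether the node's own db directory is empty, as required
# before import-db. It does NOT delete anything; clean up manually if needed.
db_is_empty() {
  local db_dir="$1"
  if [ -d "$db_dir" ] && [ -n "$(ls -A "$db_dir" 2>/dev/null)" ]; then
    echo "NOT empty: clear $db_dir before starting import-db"
    return 1
  fi
  echo "empty or absent: safe to start import-db"
}

# Example (path from this guide):
db_is_empty "$HOME/of-chain-go/cmd/node/db"
```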

Next, the node can be started by using:

```
 cd ~/of-chain-go/cmd/node
 ./node -use-log-view -log-level *:INFO -import-db ./import-db
```

{% hint style="info" %}
NOTE

The `-import-db` flag designates the path to the source database directory. In the given example, it's assumed the directory is named `import-db` and is situated close to the `node` executable.
{% endhint %}

The node will start reprocessing the provided database. The process ends with a message like:

```
import ended because data from epochs [x] or [y] does not exist
```
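When scripting around import-db (for instance in CI), the end of the run can be detected by watching the log for that message. A minimal sketch, assuming the node's output is captured to a file:

```shell
# Sketch: detect the import-db completion message in a node log file.
# Matches the log line quoted above; the exact epochs [x]/[y] vary per run.
import_ended() {
  grep -q "import ended because data from epochs" "$1"
}

# Demo with a fabricated log line:
log="$(mktemp)"
echo "import ended because data from epochs [12] or [13] does not exist" > "$log"
import_ended "$log" && echo "import finished"
```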

{% hint style="info" %}
NOTE

To accelerate the import-db process, you can skip block header signature verification when the data comes from a trusted source. Add the `-import-db-no-sig-check` flag when starting the node, alongside the previously mentioned flags.
{% endhint %}
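The flags shown so far can be assembled by a small wrapper. This is a sketch, not an official launcher; the flag names are exactly the ones already shown in this guide.

```shell
# Sketch: assemble the node's import-db arguments, optionally skipping
# signature checks for trusted data (flags as shown earlier in this guide).
build_import_args() {
  local trusted="$1"  # pass "yes" to add -import-db-no-sig-check
  local args="-use-log-view -log-level *:INFO -import-db ./import-db"
  if [ "$trusted" = "yes" ]; then
    args="$args -import-db-no-sig-check"
  fi
  echo "$args"
}

# Usage: ./node $(build_import_args yes)
```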

### Import-DB with populating an Elasticsearch cluster <a href="#import-db-with-populating-an-elasticsearch-cluster" id="import-db-with-populating-an-elasticsearch-cluster"></a>

The `import-db` mechanism can also be used to populate an Elasticsearch cluster efficiently: as blocks are re-processed, the indexed data is pushed to the cluster.

{% hint style="info" %}
NOTE

Import-DB for populating an Elasticsearch cluster should be used only for a full setup (a node in each Shard + a Metachain node)
{% endhint %}

* To prepare, update the `external.toml` file on each node.
* Use Import-DB only for full setups (a node in each shard plus a Metachain node).
* If configured correctly, the nodes will push the re-processed data to the Elasticsearch cluster.
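To illustrate the first bullet, a typical Elasticsearch section in `external.toml` looks like the sketch below. The section and field names follow MultiversX-style configs; treat them as assumptions and verify against the file shipped with your node before relying on them.

```toml
# Hypothetical sketch of the Elasticsearch section in config/external.toml.
# Field names follow MultiversX-style configs; verify against your node's file.
[ElasticSearchConnector]
    # Set to true so the node pushes re-processed data during import-db
    Enabled = true
    URL = "http://localhost:9200"
    Username = "elastic-username"
    Password = "elastic-password"
```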

{% hint style="warning" %}
**More details will be released when the testnet phase starts.**
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.onefinity.network/technology/run-a-onefinity-node/nodes/import-database.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
