Virtual DataPort incorporates a cache module that stores, in a JDBC database, local copies of the data retrieved from the data sources. When a user executes a query that involves a view with the cache enabled, the Server checks whether it can retrieve the data from the cache. If it cannot, it retrieves the data from the data source and stores it in the cache.


When the cache engine caches data, it executes an INSERT statement in the cache database for each row it has to store. These INSERT statements are executed in batches to speed up the process of loading the cache. The field Default Batch insert size, available on the cache configuration panel, sets the number of INSERT statements per batch.
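The batching idea can be sketched outside Virtual DataPort. The following minimal Python example uses sqlite3's executemany() as a stand-in for a JDBC executeBatch() round trip; the table name, columns, and helper function are illustrative assumptions, not Denodo's actual cache schema:

```python
import sqlite3

def insert_in_batches(conn, rows, batch_size=200):
    """Insert rows in chunks of batch_size; returns the number of batches.

    Each executemany() call plays the role of one JDBC executeBatch()
    round trip: fewer, larger batches mean fewer accesses to the database.
    """
    batches = 0
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        conn.executemany(
            "INSERT INTO cache_table (id, value) VALUES (?, ?)", chunk
        )
        batches += 1
    conn.commit()
    return batches

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache_table (id INTEGER, value TEXT)")
rows = [(i, f"row-{i}") for i in range(1000)]
print(insert_in_batches(conn, rows, batch_size=200))  # 1000 rows -> 5 batches
```

With a batch size of 200 (the default mentioned below), 1000 rows are loaded in 5 round trips instead of 1000 single-row INSERTs.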


A Scheduler job can also insert data into a relational database table if it is configured with a JDBC exporter. JDBC exporters allow the user to export data to a database table. Two options are worth reviewing when exporting data from Scheduler:

  • Update tuple if entry already exists in DB: the JDBC exporter will update the tuple instead of inserting it.
  • Batch size: the size of the chunk of documents received by the exporter that is inserted into the database as a single batch (using JDBC batch inserts). This option only takes effect if the “Update tuple if entry already exists in DB” option is not checked.
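The reason the two options are mutually exclusive is that updating an existing tuple turns each row into an individual insert-or-update, which cannot be grouped into one batched INSERT. A hedged sketch of that per-row upsert path, again in Python/sqlite3 (the ON CONFLICT syntax is SQLite's; the actual SQL issued by the exporter depends on the target database, and the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE export_table (id INTEGER PRIMARY KEY, value TEXT)")

def export_with_update(conn, rows):
    # With "Update tuple if entry already exists in DB" enabled, each row
    # is upserted one at a time, so JDBC batch inserts do not apply.
    for row in rows:
        conn.execute(
            "INSERT INTO export_table (id, value) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET value = excluded.value",
            row,
        )
    conn.commit()

export_with_update(conn, [(1, "first"), (2, "second")])
export_with_update(conn, [(1, "updated")])  # id 1 is updated, not duplicated
```

Re-exporting the tuple with id 1 leaves a single row with the new value, which is the behavior the first option describes.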

Both “batch size” parameters, for the cache and for Scheduler, can be tuned (200 by default) in order to speed up data insertion. A bigger value needs fewer accesses to the database but results in a larger memory footprint without a corresponding increase in performance. Very large batches usually perform worse than moderately sized ones.



