Preparing the Connection to Data Sources

After installing the Denodo Platform, and before creating a connection to a source, verify that you meet these requirements.

Preparing the Connection to Databases

For each database from which you plan to obtain data, do the following:

  1. Check if the Denodo Platform includes the JDBC driver to connect to this database.

    The appendix Supported JDBC Data Sources of the Administration Guide lists the supported databases and indicates whether their drivers are included.

    If the driver is not included, obtain it and use the Extensions management wizard of the administration tool to upload the jar files of the driver. The section Importing a JDBC Driver of the Administration Guide explains how to do this.

  2. Obtain a service account with at least READ privileges over the tables and views that you want to query. Virtual DataPort will use this account, at least, while importing these elements.

  3. If this database is going to be the target of a Data Movement, this account also requires privileges to create and delete tables, and to execute INSERT, UPDATE and DELETE statements on these tables.
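As a quick sanity check for step 2, you can connect with the service account and attempt a simple SELECT on each table you plan to import. The sketch below is illustrative only: it uses Python's built-in sqlite3 module as a stand-in, and the table names and connection are assumptions; against a real source you would use that database's own driver and connection URL.

```python
import sqlite3

def check_read_access(conn, tables):
    """Return the tables the account cannot SELECT from."""
    unreadable = []
    for table in tables:
        try:
            # LIMIT 1 keeps the probe cheap; a missing table or
            # missing privilege raises a DatabaseError here
            conn.execute(f"SELECT * FROM {table} LIMIT 1")
        except sqlite3.DatabaseError:
            unreadable.append(table)
    return unreadable

# Demo with an in-memory database: one existing table, one missing
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
print(check_read_access(conn, ["customers", "orders"]))  # ['orders']
```

Running a probe like this before importing base views surfaces missing grants early, instead of at query time.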

Cache Database

Virtual DataPort includes a Cache Engine that can store a copy of the data retrieved from the data sources, in a JDBC database. Its main goal is to increase the performance of queries.

We recommend creating a catalog or schema in the database specifically for the Cache Engine of Denodo. The Cache Engine creates many tables (one for each view of Denodo where the cache is enabled). Therefore, it is useful for these tables to be isolated from the rest of the elements of that database.

This new catalog or schema has to be created with these options:

  • Support for multi-byte characters (e.g. UTF-8). This allows storing data that contains multi-byte characters (e.g. Japanese characters), and enabling the cache for views whose names or fields contain multi-byte characters.

  • Binary collation. In a database management system, the collation specifies how the database compares and sorts character strings. One of its main effects is the order of the results in queries that have an ORDER BY over character string columns. For example, the collation determines whether uppercase and lowercase letters are treated as equal, whether accents are significant (e.g. whether “A” equals “Á”), etc.

    With binary collations, the data is ordered according to the numeric value of each byte of the character strings.

Important

Validate with the administrator of this database that the collation of this catalog/schema is binary.

The Execution Engine of Virtual DataPort expects the data to be sorted following the rules of a binary collation. If the collation is not binary, queries that execute JOIN operations of data obtained from the cache database and from another database may return incorrect results.
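To see concretely how the collation changes ORDER BY results, the snippet below contrasts SQLite's built-in BINARY collation (byte-by-byte comparison, as Virtual DataPort expects) with its case-insensitive NOCASE collation. This is only an illustration of the concept, not a test of the actual cache database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (s TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), ("B",), ("Á",)])

# BINARY: ordered by byte value of the UTF-8 encoding
# ('B' is 0x42, 'a' is 0x61, 'Á' is 0xC3 0x81)
binary = [r[0] for r in conn.execute("SELECT s FROM t ORDER BY s COLLATE BINARY")]

# NOCASE: ASCII letters compared case-insensitively, so 'a' sorts before 'B'
nocase = [r[0] for r in conn.execute("SELECT s FROM t ORDER BY s COLLATE NOCASE")]

print(binary)  # ['B', 'a', 'Á']
print(nocase)  # ['a', 'B', 'Á']
```

If the cache database sorted like NOCASE while another source sorted like BINARY, a merge join over the two streams could silently drop or misplace rows, which is why the collation of the cache catalog/schema must be binary.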

Grant the following privileges to the user account that Denodo will use to connect to this database:

  • Privileges to create and drop tables on this schema.

  • Privileges to execute SELECT, INSERT, UPDATE and DELETE statements on these tables.
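A simple way to confirm the cache account has all of these privileges is to exercise each one against a throwaway table. The sketch below does exactly that; it uses sqlite3 as a stand-in (where all operations are always allowed), and the probe table name is an arbitrary choice, but the same sequence of statements run through the real database's driver would fail at the first missing grant.

```python
import sqlite3

def check_cache_privileges(conn):
    """Exercise the privileges the cache account needs, on a throwaway
    table. Raises a DatabaseError if any operation is not permitted."""
    conn.execute("CREATE TABLE denodo_priv_probe (id INTEGER, v TEXT)")  # create
    conn.execute("INSERT INTO denodo_priv_probe VALUES (1, 'x')")        # INSERT
    conn.execute("UPDATE denodo_priv_probe SET v = 'y' WHERE id = 1")    # UPDATE
    rows = conn.execute("SELECT v FROM denodo_priv_probe").fetchall()    # SELECT
    conn.execute("DELETE FROM denodo_priv_probe WHERE id = 1")           # DELETE
    conn.execute("DROP TABLE denodo_priv_probe")                         # drop
    return rows

conn = sqlite3.connect(":memory:")
print(check_cache_privileges(conn))  # [('y',)]
```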


When you enable the Cache Engine for the first time, the default database to store the cache data is an Apache Derby database that is embedded with Denodo.

Important

We advise against using this embedded Apache Derby database: it is provided to showcase the Cache Engine and for small projects. Instead, use an external database, especially in production environments.

See more about this in the section Cache Module of the Administration Guide. It explains how the cache module works and lists the databases that Virtual DataPort can use to store cached data.

Grant Privileges in SAP for BAPI Data Sources

If you are going to query SAP executing BAPIs (i.e. create BAPI data sources), you have to grant access to the following functions, via the authorization object S_RFC, to the user account used by Virtual DataPort to connect to SAP:

  • RFCPING

  • RFC_GET_FUNCTION_INTERFACE

  • DDIF_FIELDINFO_GET

  • SYSTEM_RESET_RFC_SERVER (executed after running the BAPI of a base view)

In addition, grant access to the BAPI invoked by each BAPI base view you create.

Grant Privileges in SAP BW for Multidimensional Data Sources

Usually, SAP systems are configured to limit the functions a user can invoke.

If you are going to query SAP BW (i.e. create multidimensional data sources), you have to grant access to the following functions, via the authorization object S_RFC, to the user account used by Virtual DataPort to connect to SAP BW:

  • RFCPING

  • RFC_GET_FUNCTION_INTERFACE

  • DDIF_FIELDINFO_GET

  • SYSTEM_RESET_RFC_SERVER

Virtual DataPort invokes these BAPIs at introspection time (when opening the data source to list the SAP BW cubes):

  • BAPI_MDPROVIDER_GET_CUBES

  • BAPI_MDPROVIDER_GET_VARIABLES

  • BAPI_MDPROVIDER_GET_MEASURES

  • BAPI_MDPROVIDER_GET_DIMENSIONS

  • BAPI_MDPROVIDER_GET_LEVELS

  • BAPI_MDPROVIDER_GET_PROPERTIES

  • BAPI_MDPROVIDER_GET_HIERARCHYS

  • RSOBJS_GET_NODES_X

When querying views that involve a multidimensional data source with the “SAP BW 3.x (BAPI)” adapter, it invokes these:

  • BAPI_MDDATASET_CREATE_OBJECT

  • BAPI_MDDATASET_GET_AXIS_INFO

  • BAPI_MDDATASET_GET_AXIS_DATA

  • BAPI_MDDATASET_GET_CELL_DATA

  • BAPI_MDDATASET_SELECT_DATA

  • BAPI_MDDATASET_DELETE_OBJECT

  • BAPI_MDPROVIDER_GET_MEMBERS

When querying views that involve a multidimensional data source with the “SAP BI 7.x (BAPI)” adapter, it invokes these:

  • RSR_MDX_CREATE_OBJECT

  • RSR_MDX_GET_AXIS_INFO

  • RSR_MDX_GET_AXIS_DATA

  • RSR_MDX_GET_CELL_DATA

  • BAPI_MDDATASET_SELECT_DATA

  • BAPI_MDDATASET_DELETE_OBJECT

  • BAPI_MDPROVIDER_GET_MEMBERS
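For SAP BW multidimensional data sources, the lists above can be consolidated into a small checklist helper that reports which S_RFC function grants are still missing for a given adapter. The dictionary below simply restates the lists from this section; the helper itself is a convenience sketch for planning grants, not part of any Denodo or SAP API.

```python
# S_RFC functions restated from the lists above:
# common functions, introspection-time BAPIs, and per-adapter query BAPIs.
COMMON = {"RFCPING", "RFC_GET_FUNCTION_INTERFACE",
          "DDIF_FIELDINFO_GET", "SYSTEM_RESET_RFC_SERVER"}

INTROSPECTION = {"BAPI_MDPROVIDER_GET_CUBES", "BAPI_MDPROVIDER_GET_VARIABLES",
                 "BAPI_MDPROVIDER_GET_MEASURES", "BAPI_MDPROVIDER_GET_DIMENSIONS",
                 "BAPI_MDPROVIDER_GET_LEVELS", "BAPI_MDPROVIDER_GET_PROPERTIES",
                 "BAPI_MDPROVIDER_GET_HIERARCHYS", "RSOBJS_GET_NODES_X"}

QUERY = {
    "SAP BW 3.x (BAPI)": {
        "BAPI_MDDATASET_CREATE_OBJECT", "BAPI_MDDATASET_GET_AXIS_INFO",
        "BAPI_MDDATASET_GET_AXIS_DATA", "BAPI_MDDATASET_GET_CELL_DATA",
        "BAPI_MDDATASET_SELECT_DATA", "BAPI_MDDATASET_DELETE_OBJECT",
        "BAPI_MDPROVIDER_GET_MEMBERS"},
    "SAP BI 7.x (BAPI)": {
        "RSR_MDX_CREATE_OBJECT", "RSR_MDX_GET_AXIS_INFO",
        "RSR_MDX_GET_AXIS_DATA", "RSR_MDX_GET_CELL_DATA",
        "BAPI_MDDATASET_SELECT_DATA", "BAPI_MDDATASET_DELETE_OBJECT",
        "BAPI_MDPROVIDER_GET_MEMBERS"},
}

def missing_grants(adapter, granted):
    """Return the S_RFC functions still missing for the given adapter."""
    required = COMMON | INTROSPECTION | QUERY[adapter]
    return sorted(required - set(granted))

# An account with every required grant has nothing missing
full = COMMON | INTROSPECTION | QUERY["SAP BI 7.x (BAPI)"]
print(missing_grants("SAP BI 7.x (BAPI)", full))  # []
```

Such a checklist can be handed to the SAP administrator so the grants for the chosen adapter are applied in one pass.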