
Goal

This document describes how to connect to Azure Databricks from Denodo Virtual DataPort.

Content

Azure Databricks is an Apache Spark-based analytics platform optimized for Microsoft Azure cloud services.

Connecting to Azure Databricks from Denodo

  • From the Denodo Design Studio, create a new JDBC data source by selecting Virtual Database > New > Data source > JDBC. This opens the wizard for creating a JDBC data source connection.

  • Provide a Name for the JDBC data source.

  • Select the appropriate Database adapter. In this example, we are using the Databricks adapter.
  • Since Denodo 8.0 update 20220815, the default Driver classpath is databricks-2. The driver itself is not included in this classpath, so we recommend downloading and installing the latest Databricks JDBC driver.
  • Next, provide the Database URI.
    Format: jdbc:databricks://<host_name>:<port>/<database_name>;. Additional parameters, such as httpPath and ssl, are appended to the URI after the database name. For more details on these parameters, refer to Configure the Databricks ODBC and JDBC drivers. A complete example URI is shown in the connection sketch after this list.
  • If you are using a Databricks driver version older than 2.6.25, you may need to change the Driver class to ‘com.simba.spark.jdbc.Driver’ and the Database URI prefix to ‘jdbc:spark://<host_name>:<port>/<database_name>;’.
  • You can select what type of Transaction Isolation you want from the options provided. In this example, we use the ‘Database Default’.
  • Authentication: there are three options provided. In this example, we use the ‘Use login and password’ option with a personal access token.
  • In the User field, enter the literal value ‘token’.
  • In the Password field, provide the personal access token.
  • After providing the connection details, you can test the connection by clicking the Test Connection button.
  • Save the data source.
  • Once saved, click on the ‘Create base view’ tab to introspect source metadata available through the Data Source.
  • To incorporate some of the tables into the Denodo virtual schema, check the box next to the tables or views you want to import and then click ‘Create selected’.

  • Click ‘Save’ to save the base view.

  • You can also use the ‘Create from query’ option if you wish to build the base view from a custom query.
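
To verify these connection settings outside of Denodo, the short Java sketch below opens a JDBC connection using the same Database URI format and token-based login described above and runs a simple query. It assumes the Databricks JDBC driver (version 2.6.25 or later) is on the classpath; the host, port, httpPath and personal access token are placeholders to be replaced with the values from your own Azure Databricks cluster.

    // Minimal connection test; assumes the Databricks JDBC driver (2.6.25 or later) is on the classpath.
    // The host, port, httpPath and token below are placeholders for a hypothetical cluster.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DatabricksConnectionTest {
        public static void main(String[] args) throws Exception {
            // Same format as the Database URI field in the Denodo wizard:
            // jdbc:databricks://<host_name>:<port>/<database_name>;<parameters>
            String uri = "jdbc:databricks://adb-1234567890123456.7.azuredatabricks.net:443/default;"
                    + "transportMode=http;ssl=1;"
                    + "httpPath=sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh";

            // Personal access token authentication: the user is the literal string "token"
            // and the password is the token itself, exactly as entered in the Denodo wizard.
            String user = "token";
            String password = "<personal_access_token>";

            try (Connection conn = DriverManager.getConnection(uri, user, password);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT current_date()")) {
                while (rs.next()) {
                    System.out.println("Connected to Databricks, current date: " + rs.getString(1));
                }
            }
        }
    }

If you are using the older Simba driver, the same sketch applies with the ‘jdbc:spark://’ URI prefix and driver class mentioned above.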

References

Virtual DataPort Administration Guide - JDBC Sources

Virtual DataPort Administration Guide - Supported JDBC Data Sources

Azure Databricks Documentation - Configure JDBC/ODBC connection
