Denodo dialect support on Spark 3.3.0

Hi, I'm trying to connect to Denodo and run a VQL query (not a view/table export) using denodo-vdp-jdbcdriver.jar on a Spark cluster in AWS Glue. The job used to run fine on Spark 2.4.3, but it does not on Spark 3.3.0. I use PySpark to run the job, and I wonder how the Denodo dialect has to be registered from PySpark code. I referred to the documentation, but it mostly covers Java and Scala, not PySpark. Any suggestions? The Spark job just fails without any error message after making a connection to Denodo. Thanks in advance!
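For context, reading from Denodo via the plain Spark JDBC data source looks roughly like the sketch below. This is not the poster's code: the host, port, database, and credentials are placeholders, and the URL format and driver class are assumptions based on the standard Denodo JDBC driver. The `query` option is Spark's documented way to push a custom statement instead of reading a whole table via `dbtable`:

```python
# Minimal sketch: running a custom VQL statement against Denodo over JDBC
# from PySpark. All connection values below are placeholders.

def denodo_jdbc_options(host, port, database, user, password, query):
    """Build the option map for spark.read.format("jdbc").

    The URL scheme (jdbc:vdb://) and driver class are the ones commonly
    used by the Denodo VDP JDBC driver; verify them against your driver
    version's documentation.
    """
    return {
        "url": f"jdbc:vdb://{host}:{port}/{database}",
        "driver": "com.denodo.vdp.jdbc.Driver",
        "query": query,  # push a statement instead of dbtable
        "user": user,
        "password": password,
    }

# Usage (requires denodo-vdp-jdbcdriver.jar on the Spark classpath,
# e.g. via the Glue job's --extra-jars / "Dependent JARs path" setting):
#
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.getOrCreate()
# opts = denodo_jdbc_options("denodo-host", 9999, "my_vdb",
#                            "user", "secret", "SELECT 1")
# df = spark.read.format("jdbc").options(**opts).load()
# df.show()
```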
14-02-2024 07:46:46 -0500

1 Answer

Hi, As mentioned in the document, the Denodo dialect for Spark SQL is compatible with versions 2.4 and 3.0 of Apache Spark at this moment. So, I would recommend using Spark version 2.4 or 3.0 to run the job without errors. Spark SQL is the Spark module used for structured data processing. Please follow the [Dialect for Spark SQL section] of the User Manual for detailed steps on installing and running the Spark SQL module. There are also several Python libraries that help to **establish connections between Python and Denodo**. Please take a look at the [Connect Denodo and Python] Knowledge Base article, which gives detailed information about connecting to Denodo using Python modules. Hope this helps!
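As a rough illustration of the Python route the answer mentions, one common option is connecting through Denodo's ODBC interface with `pyodbc`. The sketch below is an assumption, not the Knowledge Base article's code: the driver name, port, and credentials are illustrative placeholders that must match your local Denodo ODBC driver installation.

```python
# Hedged sketch: connecting to Denodo from plain Python over ODBC.
# Driver name, host, port, and credentials are placeholder assumptions.

def denodo_odbc_connection_string(host, port, database, user, password):
    """Build an ODBC connection string for a Denodo ODBC data source.

    The DRIVER value is illustrative; use the exact name under which the
    Denodo ODBC driver is registered on your system.
    """
    return (
        "DRIVER={DenodoODBC Unicode(x64)};"
        f"SERVER={host};PORT={port};DATABASE={database};"
        f"UID={user};PWD={password};"
    )

# Usage (requires the Denodo ODBC driver and the pyodbc package):
#
# import pyodbc
# conn = pyodbc.connect(denodo_odbc_connection_string(
#     "denodo-host", 9996, "my_vdb", "user", "secret"))
# rows = conn.cursor().execute("SELECT 1").fetchall()
```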
Denodo Team
16-02-2024 07:42:33 -0500