Latest Activity
Hi Team, I am trying to connect to the Denodo server from Azure Databricks using the Spark JDBC read API, like:
url = "jdbc:vdb://%s:%s/%s" % (server, port, vdb)
remote_table = (spark.read
    .format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .opti...
Azure, Driver, databricks, JDBC, Denodo, vdp jdbc driver, Azure databricks
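The truncated snippet above follows the standard Spark JDBC read pattern. A minimal sketch of the full call is below; the driver class name, credentials, and table name are assumptions for illustration (the Denodo JDBC driver JAR must already be on the Spark classpath).

```python
# Sketch of a Spark JDBC read against Denodo VDP. Server, port, vdb, user,
# password, and table are placeholders; "com.denodo.vdp.jdbc.Driver" is the
# assumed Denodo driver class name.

def build_denodo_jdbc_url(server, port, vdb):
    """Build a Denodo VDP JDBC URL of the form jdbc:vdb://host:port/database."""
    return "jdbc:vdb://%s:%s/%s" % (server, port, vdb)

def read_remote_table(spark, server, port, vdb, table, user, password,
                      driver="com.denodo.vdp.jdbc.Driver"):
    """Read a Denodo view into a Spark DataFrame over JDBC.

    `spark` is an existing SparkSession; the Denodo JDBC driver JAR must be
    available to the cluster (e.g. via the spark.jars configuration).
    """
    return (spark.read
            .format("jdbc")
            .option("driver", driver)
            .option("url", build_denodo_jdbc_url(server, port, vdb))
            .option("dbtable", table)
            .option("user", user)
            .option("password", password)
            .load())
```

The returned DataFrame is lazy; Spark only opens the JDBC connection when an action (e.g. `.show()` or `.count()`) is executed.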
Hi, I am trying to set up Azure Databricks + ADLS as cache. On the Linux server, I logged into the Docker container running the Denodo instance, installed databricks-cli, and configured it with a token. I can see the directories when executing dbfs ls, which m...
databricks, Cache, Azure databricks, Azure Data Lake Storage Gen2
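The `dbfs ls` check described above can also be scripted. A small sketch, assuming `databricks configure --token` has already been run and the `dbfs` binary is on the PATH inside the container:

```python
# Sketch: verify the Databricks CLI configuration from Python by listing a
# DBFS path. The path dbfs:/mnt/ is a placeholder mount point.
import subprocess

def dbfs_ls_command(path="dbfs:/mnt/"):
    """Build the dbfs listing command for a given DBFS path."""
    return ["dbfs", "ls", path]

def list_dbfs(path="dbfs:/mnt/"):
    """Run `dbfs ls <path>` and return the listed entries, one per line.

    Raises CalledProcessError if the CLI is misconfigured or the path
    does not exist, which makes this useful as a quick connectivity check.
    """
    result = subprocess.run(dbfs_ls_command(path),
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()
```

If `list_dbfs()` succeeds from inside the Denodo container, the token and CLI setup that Denodo's bulk load path relies on are working.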
I have a requirement to build a custom Docker image that provides Denodo, so that the Denodo database can later be accessed via Databricks. The problem is to set up everything inside the Docker image so that any developer is able to use Denodo...
I need help with the Denodo connector to Delta Lake on Databricks. I am using the Simba connector and am able to connect to tables in Delta Lake. But when my Databricks cluster goes down, my VQL query fails, so the cluster always needs to be started first, which is ...
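One common mitigation for the terminated-cluster problem is to start the cluster programmatically before querying, via the Databricks Clusters REST API (2.0). A hedged sketch, where the workspace host, token, and cluster id are placeholders:

```python
# Sketch: start a terminated Databricks cluster via the Clusters API 2.0
# (POST /api/2.0/clusters/start). host, token, and cluster_id are
# placeholders; authentication uses a personal access token.
import json
import urllib.request

def start_cluster_request(host, token, cluster_id):
    """Build the POST request to /api/2.0/clusters/start for one cluster."""
    body = json.dumps({"cluster_id": cluster_id}).encode("utf-8")
    return urllib.request.Request(
        url="%s/api/2.0/clusters/start" % host.rstrip("/"),
        data=body,
        headers={"Authorization": "Bearer %s" % token,
                 "Content-Type": "application/json"},
        method="POST",
    )

def start_cluster(host, token, cluster_id):
    """Send the start request and return the HTTP status code."""
    with urllib.request.urlopen(start_cluster_request(host, token, cluster_id)) as resp:
        return resp.status
```

Note that the start call returns before the cluster is actually usable; callers typically poll the cluster state until it is RUNNING, or simply tune the cluster's auto-termination setting so it stays up during query windows.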
I have created a connection to Azure Databricks to be used as cache, and I am able to test bulk load. How will this work in a real scenario? I have the mount on the dbfs:/mnt/ folder and have created a base view on top of the data source. But I need to understand how in r...
I have configured the bulk data load API with Databricks as cache and configured the Databricks CLI (dbfs) on our Denodo Linux server. Testing succeeds:
Listing HDFS URI contents ... OK
Creating HDFS URI temporary directory ... OK
Creating table ... OK...
I am getting an error when trying to use the bulk data load API for a Databricks source in Denodo 8.0. I configured the Databricks source and am able to connect successfully. I was able to install the databricks-cli on my server and configured the Databricks token as in [http...
I would like to know how to configure the bulk data load API for a Databricks source in Denodo 8.0. I have gone through the bulk data load API docs for Databricks, Spark, and Impala, but I didn't understand how to configure it. Can you please let me know the step b...
I am trying to load the CData driver JAR to connect to a Databricks cluster from Denodo 8. I am following the instructions to load the driver onto the classpath at the link below: https://community.denodo.com/docs/html/browse...
Has anyone connected Denodo to a Databricks cluster for MPP scaling? I'd like to know the gotchas before I make the attempt myself.