Hi,
In order to configure Databricks for bulk data load, I would perform the following steps:
* Install Python 3 and the Databricks client, and then configure the authentication (see the sketch after this list).
* Once the client application is installed, create a new JDBC data source in the Virtual DataPort Administration Tool using the Databricks adapter.
* Navigate to the Read & Write tab of the data source, check the option "Use Bulk Data Load APIs", and provide the required configuration details.
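For the client setup in the first step, here is a minimal sketch, assuming the client is the `databricks-cli` package installed with `pip` and that authentication uses a personal access token; the exact package and authentication method may differ in your environment:

```
# Install the Databricks CLI client (assumes Python 3 and pip are already on the PATH)
pip install databricks-cli

# Configure authentication: this prompts for the workspace URL and a personal access token
databricks configure --token
```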
Along with these settings, I would also make sure that the credentials used to connect to the data source have write permissions.
If I were to use a connection setting similar to [Impala](https://community.denodo.com/docs/html/browse/8.0/en/vdp/administration/bulk_data_load/impala#impala), I would make sure to set the environment variables **HADOOP_HOME** and **HADOOP_USER_NAME**, and also confirm that the user provided has read and write privileges.
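As an illustration of that Impala-style setup, a hedged sketch of those two environment variables on a Linux host; the path and user name below are placeholders, not values from the documentation:

```
# Location of the local Hadoop client installation used for the bulk uploads
export HADOOP_HOME=/opt/hadoop

# User Denodo acts as when writing to the distributed file system;
# it must have read and write privileges on the target directories
export HADOOP_USER_NAME=denodo_user
```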
You can take a look at the [Databricks](https://community.denodo.com/docs/html/browse/8.0/en/vdp/administration/bulk_data_load/databricks) section of the Virtual DataPort Administration Guide.
You can also take a look at the Knowledge Base article [How to connect to Azure Databricks from Denodo](https://community.denodo.com/kb/view/document/How%20to%20connect%20to%20Azure%20Databricks%20from%20Denodo?category=Data+Sources) for additional information.
Hope this helps!