Denodo MPP

I want a sample document for getting MPP done in Denodo with PrestoDB or Impala, i.e. how to configure MPP Query Acceleration in Denodo.
02-03-2022 07:52:27 -0500

6 Answers

Hi, Denodo supports MPP query acceleration through integration with Hadoop-based MPP systems such as Presto and Impala. This parallel processing is mainly used to process huge volumes of data distributed across different data sources. To configure MPP acceleration with Presto and Impala in Denodo, you could refer to the How to configure MPP Query Acceleration in Denodo Knowledge Base article for detailed information. Hope this helps!
Denodo Team
03-03-2022 01:05:40 -0500
I already checked the link given above, but I am not able to figure out how to configure MPP. Can you please provide straightforward steps on how to configure MPP in Denodo, ideally with a working demo or a step-by-step video?
03-03-2022 01:43:37 -0500
Hi,

To configure MPP with Presto and Impala, I would follow the steps below in the Virtual DataPort Administration Tool:

* Navigate to Administration > Server configuration. In the Queries optimization tab, enable the Automatic cost based optimization and Automatic simplification of queries options.
* Then select the data source for parallel processing acceleration. It is possible to select the cache data source or a specific JDBC data source.
* To create a JDBC data source to support MPP, right-click on the database and navigate to New > Data Sources > JDBC.
* In the Configuration tab, click on the Connection tab and, in Database adapter, select the required adapter for Presto or Impala.
* Specify the Presto- or Impala-related information, such as the Driver class path, Driver class, etc., in the Configuration tab.
* To configure MPP, click on the Read & Write tab and check the Use Bulk Data Load API checkbox.
* Then specify the relevant Hadoop information, such as the file system, HDFS URI, and server time zone.
* Click Test connection and save.

By performing the above steps, you will be able to configure MPP in Denodo. Refer to the Configure VDP and MPP Query Acceleration: Sizing guidelines Knowledge Base articles for more information. Hope this helps!
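To make the fields from those steps concrete, the values entered in the Connection and Read & Write tabs can be sketched as a key/value structure. This is a minimal Python sketch, assuming the Facebook Presto JDBC driver; every hostname, port, and path below is an illustrative placeholder, not a value from this thread:

```python
def presto_datasource_settings(host, port, catalog, schema):
    """Return example field values for a Presto JDBC data source used
    for MPP query acceleration (placeholder values, adapt to your setup)."""
    return {
        "connection": {
            # Adapter and driver class for the (assumed) Facebook Presto driver
            "database_adapter": "PrestoDB",
            "driver_class": "com.facebook.presto.jdbc.PrestoDriver",
            # Presto JDBC URLs take the form jdbc:presto://host:port/catalog/schema
            "database_url": f"jdbc:presto://{host}:{port}/{catalog}/{schema}",
        },
        "read_write": {
            # Bulk Data Load API plus Hadoop details, as described in the steps
            "use_bulk_data_load_api": True,
            "file_system": "HDFS",
            "hdfs_uri": f"hdfs://{host}:8020/user/presto/",  # placeholder path
            "server_time_zone": "UTC",  # should match the Presto server
        },
    }

settings = presto_datasource_settings("presto-coordinator", 8080, "hive", "default")
print(settings["connection"]["database_url"])
# jdbc:presto://presto-coordinator:8080/hive/default
```

For Impala the same structure applies, but with the Impala adapter and its own driver class and URL format instead of the Presto ones.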
Denodo Team
03-03-2022 03:35:47 -0500
I am trying Spark version 2.4.8 and Hadoop 2.10.1. I have installed both on one of my virtual machines (Ubuntu 20.04.4 LTS) in GCP. Now I want to configure MPP with Spark/Hadoop. Please provide me with steps to proceed. Do I need to add a new JDBC data source?

What will my details be in the Connection tab?
* Database adapter: Spark SQL 2.x ???
* Database URL: jdbc:hive2://host:port/database ???
* Transaction isolation: ???
* Authentication: ???

What will my details be in the Read & Write tab?
* Enable Use bulk data load APIs ??
* Work path: Default
* File system: HDFS
* HDFS URI: ???
* Server time zone: ??? Africa/Abidjan
* Batch insert size (rows): ???

And what should I fill in under Hadoop Properties?
15-03-2022 03:28:14 -0400
Any update on this?
25-03-2022 05:02:56 -0400
Hi,

To configure MPP with Spark, I would choose Spark SQL 2.x as the Database adapter and follow the steps mentioned in the previous comment. For more information, you could refer to the How to configure MPP Query Acceleration in Denodo Knowledge Base article. Additionally, if you need further assistance and you are a user with valid support access, create a support case on the Denodo Support Site so that our support team can assist you. Hope this helps!
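As a rough illustration of how the Spark SQL 2.x fields asked about in the question might be filled in, here is a hedged Python sketch. The adapter name and the jdbc:hive2 URL shape come from the thread itself; the isolation, authentication, work path, HDFS port, time zone, and batch size values are placeholder assumptions to adapt to your environment:

```python
def spark_mpp_settings(spark_host, hdfs_host):
    """Return example field values for a Spark SQL 2.x data source over a
    Hive Thrift server and HDFS (placeholder values, not authoritative)."""
    return {
        "connection": {
            "database_adapter": "Spark SQL 2.x",
            # Spark's Thrift server speaks the HiveServer2 protocol;
            # 10000 is its usual default port (assumption)
            "database_url": f"jdbc:hive2://{spark_host}:10000/default",
            "transaction_isolation": "Database default",   # assumption
            "authentication": "Use login and password",    # assumption
        },
        "read_write": {
            "use_bulk_data_load_api": True,
            "work_path": "Default",
            "file_system": "HDFS",
            # 9000 is a common Hadoop 2.x NameNode port (assumption)
            "hdfs_uri": f"hdfs://{hdfs_host}:9000/user/denodo/",
            "server_time_zone": "UTC",        # should match the Spark server
            "batch_insert_size_rows": 200000, # illustrative placeholder
        },
    }

print(spark_mpp_settings("spark-host", "hdfs-host")["connection"]["database_url"])
# jdbc:hive2://spark-host:10000/default
```

Hadoop properties (e.g. authentication-related `fs.*` settings) depend entirely on how the cluster is secured, so they are best taken from the cluster's own `core-site.xml` rather than guessed here.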
Denodo Team
25-03-2022 06:23:53 -0400