Overview

This document is the fourth part of a series of articles about how to achieve the best performance while working with the Denodo Platform.

These articles will provide best practices on a variety of different topics such as the way you should build your model, the different configuration settings you should use and the caching method that best suits your scenario. This guidance is especially important in analytic use cases which require integrating large volumes of data from different sources.

The first document of this series provides general guidelines on how to design views that are optimal from a performance point of view. The second document contains details regarding the meta-information and configuration settings connected to the Query Optimizer. The third article incorporates recommendations for different aspects of the Cache Module. This fourth document is aimed at Denodo administrators and developers who need to analyze and improve the performance of a query that is not meeting its goals, assuming the origin of the issue is the query itself rather than a general server congestion situation.

Denodo architecture

Denodo enables business applications to execute real-time queries on a series of distributed and heterogeneous data sources, as if all the data were contained in a large “Virtual” Database. Virtual DataPort acts as a mediator that provides a structured and unified view of the data included in all the data sources of the system.

Slightly simplified, considering a single data source, executing a query in Denodo involves: executing a query on the data source, transmitting data through the network from the source to Denodo, performing transformations and combinations at the Denodo server, transmitting data through the network from Denodo to the client and, finally, client consumption.

Considering the execution model, when a query is time-consuming, the bottleneck can be one of the following:

  • The source. High processing time may be caused by slow processing in some of the data sources involved in the query, or by slow network communication between the Virtual DataPort Server and the data source.

  • The processing at Virtual DataPort. Denodo is responsible for optimally distributing processing between the Virtual DataPort server and the sources. As the final step in the execution process, the execution engine post-processes and combines the results if necessary. Several factors can make this step slower than it should be.

  • The client. Client applications consuming Denodo results may be causing the high processing time due to slow processing, or due to a high-latency network connection between the Denodo server and the client.

How to use this document

This document is intended as a guide for identifying and solving bottlenecks. In order to achieve this goal, there are two main sections:

  • Identifying the bottleneck. Guidelines to determine which part of the architecture is responsible for the bottleneck: the source, the client or Denodo. This part is structured in 3 subsections, each of them focused on one of these areas:
  • Identifying the client as the dominant factor
  • Identifying the source as the dominant factor
  • Identifying Denodo as the dominant factor

Each subsection will include:

  • Detailed information about what parameters of the execution trace should be analyzed or what test should be performed. The Execution plan basics section provides some basic information on how to obtain or read execution plans.

  • A flowchart that summarizes the necessary steps to determine whether the bottleneck is in that part of the architecture or not. Following the diagram of a subsection, it is possible to reach the cause (one of the three parts of the architecture) or to find a square item indicating that the search must continue in another section. Note that these diagrams use a color system to help associate each step with the part of the architecture it belongs to.

Note: The complete flowcharts can be found in an Appendix at the end of this document.

  • Solving the bottleneck. Once the origin of the problem is identified, this part of the document explains possible solutions, which will have to be assessed depending on the specific scenario.

Execution plan basics

An execution plan contains the process the execution engine will follow, or has followed, to answer a query. Execution plans, also called traces, are therefore the main tool for troubleshooting a poorly performing query.

In order to get the trace of a query:

  • After executing the query from the Execution panel of a VDP administration tool, you can see the execution trace in the 'Query Results' section.

Additionally, it is possible to see the query plan of a query before executing it by clicking on the Query plan button. The Query plan shows how the Server will execute the query, making it possible to check whether the appropriate optimizations are being applied and whether all possible operations are being delegated to the data source.

  • When executing the query from a VQL shell or another client, add the 'TRACE' clause at the end of the query:


SELECT * FROM actor TRACE


To see the query plan that Denodo will follow without running the query execute the command ‘DESC QUERYPLAN’.

DESC QUERYPLAN SELECT * FROM actor

In the Denodo Virtual DataPort Administration Tool, clicking the 'Execution trace' button shows the graphical representation of the query plan.

In the graphical representation, it is possible to click on the tree nodes to display their properties.

Swapping, Memory limit reached and Effective time are some of the significant properties when identifying a bottleneck.

Note that each of the nodes at the bottom, also called leaf nodes, contains information about an access to a data source. Their properties Execution time, Response time, SQL sentence and Connection time are also relevant in the study of bottlenecks.

Detailed information about these properties and how to interpret them can be found in the sections ‘Identifying the bottleneck’ and ‘Solving the bottleneck’.

Identifying the bottleneck

If the performance of a query is not as good as expected, the first step is to analyze whether the cause of the long execution is directly related to that query, or whether it is a consequence of a general degradation of the server that may not be related to that specific query (for instance, if the Denodo server has reached the maximum memory limit available in an environment with high concurrency). This document is focused on analyzing the performance of a query that is not affected by other external factors. The latter cause will be the subject of a future document in this series, but take into account that it is possible to review the overall state of the system using the Diagnostic and Monitoring tool (for real-time information) or the Denodo Monitor Tool (for historical information).

In order to identify where the bottleneck is, the first step is to determine the dominant factor in the total execution overhead: the source, the client or Denodo. For this, the graphical representation of the execution trace of the query is usually the best source of information.

Identifying the client as the dominant factor

Let’s start the troubleshooting process by taking a look at the client side.

If the client application is reading the results slowly, Denodo will start accumulating the result rows that are waiting to be delivered in a memory buffer. In order to avoid using too much memory, when this happens Denodo may need to pause the query execution until the client application has read more rows.

To identify this situation, review the top node of the tree diagram and check whether the Memory limit reached parameter is set to 'Yes'.

If this happens, it is possible to measure the impact of the client processing by executing the same query, adding the option 'cache_return_query_results'='false' in the CONTEXT clause:

select * from test CONTEXT('cache_return_query_results' = 'false')

This will execute the same query, but this time Denodo will not send any data to the client, avoiding any possible interference generated by the Denodo-to-client connection or by the client application itself. Therefore, when the execution time is significantly better after this modification, the dominant factor in the overhead under study is the client processing or the Denodo-to-client network.

Under these conditions, it is necessary to identify whether the bottleneck is the client itself or the network between Denodo and the client application. In order to do so, a third-party tool installed on the same host as the current client consumer can be used to create a JDBC connection to the Virtual DataPort server. Then, run the query from this new client making sure the entire result set is retrieved (these tools usually fetch only 1000 rows by default). If the execution time is better than when the client application runs the query fetching the entire result, the bottleneck is the client application. Otherwise, there is a network performance issue between Denodo and the client that is consuming the data.

Note: Be aware that it is possible to check and modify the value of Maximum size in memory of intermediate results to increase the size of the buffer holding the results and avoid pausing the query. However, be mindful that excessive memory consumption by processing nodes may lead to other issues. Find more information about this option in the section ‘Review the memory usage configuration’ in this document.

Identifying the source as the dominant factor

If the client is ruled out as the bottleneck, the next step is to examine the processing time at the source layer.

First of all, review the leaf nodes of the execution trace diagram since they contain information about the retrieval of data from the sources.


An important property to check on these nodes is the Response time, which shows the moment when the data source sent the first row to the Denodo Server. The time taken to get the first row can be used as a measure to identify the source as the dominant factor. This value can be calculated by subtracting the value of the Start time attribute from the Response time attribute.
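
As a purely hypothetical illustration: if a leaf node shows Start time = 10:15:00.100 and Response time = 10:15:05.200, the source needed 5.1 seconds to deliver its first row. If the total execution time of that node is, say, 5.5 seconds, almost all of the time was spent waiting for the source to start returning data, which points to the source as the dominant factor.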


If the execution plan includes several leaves over the same data source, check if the property Connection time is significantly high for some of them. If it is, check whether the number of leaf nodes accessing that same data source exceeds the maximum number of connections allowed in the connection pool configuration. In this case, the dominant factor is that the number of connections needed exceeds the number of connections configured for that data source. For more details about this issue check the section Dominant factor: Too many connections to a data source.

To check the maximum number of active connections allowed, open the data source, go to the ‘Advanced’ tab and expand the ‘Connection Pool configuration’ section.

Finally, note that if the response time and the connection time are low while the total execution time in the source is high, it is most likely not a source processing problem. Instead, it is usually a symptom, and the real cause may be one of the following (at the Denodo layer):

  • The number of rows that the data source has to return is unnecessarily high. In this case check the section Dominant factor: Too many rows from a data source.
  • There is some operation pausing data retrieval from this source. For more information about this behavior see the section ‘High execution time on a data source due to memory management policies’.

Only after discarding these other possible bottlenecks continue exploring if the dominant factor is the source. In this case check the following possible causes:

  • Too many connections to that data source. Even if a data source has enough connections configured in the connection pool, if a query has several leaf nodes over the same data source, this high concurrency can affect its performance. It can happen that one of the queries sent to the data source performs well when it runs isolated from the others, but when Denodo sends several queries concurrently to the same source the performance decreases due to the extra load. To identify if this is the case, there are two options:

  • Open the affected data source and create a new base view using the option ‘Create from query’ with the SQL sentence shown in the SQL sentence parameter of the leaf node (the query Denodo is sending to the data source). Then execute the new view.

  • Reduce the original query to isolate the part of the query that is pushed to the data source in the affected leaf node.

If the execution time of the isolated query is significantly better than in the original query, review the model to see if there is something preventing the delegation of some operations to that data source. For more details about this see the section ‘Review the model’.

  • It takes a long time for the data source to return its result. To identify if this is the case, get the query executed by Denodo in the data source by reviewing the SQL sentence parameter, available in the leaf nodes of the execution trace (see the Getting the SQL query delegated by a derived view document of the Knowledge Base), and execute it directly against the data source to check the execution time while avoiding the transfer of data over the network. If the time is still high, talk to the data source administrator to analyze the poor performance of this query in that data source. For more details see the section ‘Dominant factor: Slow source’.

  • Network performance between Denodo and the source is slow. If the number of rows to transfer from the source to Denodo is significant, another possible cause to evaluate is the speed of the network. To determine if this is the cause, compare the times between the query executed from Denodo and the query executed from another JDBC client, such as Jisql. In order to perform an accurate comparison, take into account:

  • The third-party JDBC client must be installed on the same host as the Virtual DataPort server and must use the same JDBC driver, connection details and credentials that are used in the connection to the source from Denodo.

  • Run the query in both clients making sure the entire result set is retrieved (these tools usually fetch only 1000 rows by default). Note that the query to execute in the third-party JDBC client is the SQL query that Denodo delegates to that data source (not the query sent to Denodo by the client). This query can be found in the ‘SQL sentence’ property of the leaf node being analyzed (see the Getting the SQL query delegated by a derived view document of the Knowledge Base).

  • To remove the time corresponding to the Denodo-to-client hop, which does not exist in the third-party execution, execute the same query adding the option 'cache_return_query_results'='false' in the CONTEXT clause:

select * from test CONTEXT('cache_return_query_results' = 'false')

If the execution time is similar in both cases and the source itself has already been discarded as the cause, the bottleneck most likely comes from the network between the source and Denodo.

High execution time on a data source due to memory management policies

Denodo Virtual DataPort uses memory management methods to avoid memory overflows during execution. For instance, it uses memory buffers that can trigger a stop in the retrieval of data from data sources when they are full. Data retrieval is resumed automatically when more memory is available. The Memory limit reached attribute, which can be found in the nodes under the Advanced Properties section of the trace, shows whether this limit was reached. If so, there are rows accumulated in a memory buffer waiting to be processed, thus stopping the retrieval of data and reducing the performance of the operation.

For example, suppose we want to obtain the account data of our customers from a SaaS application, together with some extra information that is stored in an Oracle database. Since accessing the SaaS application is slow (data needs to go through a WAN), Oracle’s rows arrive first at a much faster pace, and will accumulate in the memory buffer, waiting to be processed by the upper nodes that also need data from the slower branch coming from the SaaS data source.

The size of the buffers is controlled by the value of Maximum size in memory of intermediate results. Check and increase the value of that property if necessary. However, be mindful that excessive memory consumption by processing nodes may lead to other issues. Find more information about this option in the section ‘Review the memory usage configuration’.

Identifying Denodo as the dominant factor

Finally, when sources and clients have been ruled out as the cause of the overhead, review the intermediate nodes of the execution trace. In these nodes check the Effective time attribute, available since Denodo update 20190903. It shows the time Denodo has spent executing the operation represented by that node, disregarding the time spent waiting for other nodes of the execution tree to complete their work. Therefore, it serves as a sign of a bottleneck in that step. It is recommended to start by checking JOIN, GROUP BY, SEMIJOIN, MINUS, INTERSECT and FLATTEN nodes, because these operations are potentially more costly. Afterwards, review the remaining nodes since, even if they correspond to an operation that appears to be less costly like a projection, they can include an ORDER BY or a DISTINCT that can require more execution time.

If the Effective time property is significantly high, see the section ‘Dominant factor: Slow operation in Denodo’.

Notice that when there is a costly operation in Denodo, you may also notice that the execution time in some of the data sources increases. This is just a consequence of a memory control mechanism that can pause the row retrieval to avoid holding too much data in memory waiting to be processed (find more information about this mechanism in the section 'High execution time on a data source due to memory management policies').

Solving the bottleneck

After identifying one of the three main points of Denodo's architecture as the dominant factor in the execution time, it is necessary to implement the proper solutions depending on the detected scenario. The following subsections detail the different alternatives that can improve the performance depending on the dominant factor.

Dominant factor: Client

When the execution of a query is too slow due to the client, the solution will depend on whether the cause of the bottleneck is the network or the client application itself.

Dominant factor: Slow network Client - Denodo

Slow network transfer speeds between the Denodo server and the client application can be addressed in different ways:

  • Denodo offers an option to optimize communications between the Virtual DataPort server and the client application for high latency networks. If the client accesses Denodo through a WAN connection there are two possible ways to activate this optimization depending on the client:

  • Administration Tool: Click Tools > Admin Tool Preferences and select the Connection section. Select Use WAN optimized communications to optimize communications with the Virtual DataPort server. Note that this option may reduce performance if used in LAN environments.

  • JDBC client: Add the wanOptimized parameter to the JDBC driver connection. If set to true, the driver enables several features that reduce the latency in the communications between the client application and the Virtual DataPort server.

        Also note that if network latency is a problem and the Denodo server has SSL connections enabled, the Use only SSL communications option must be checked, or the parameter ssl=true added to the JDBC connection URL. The reason is that, by default, the driver tries to establish a non-SSL connection with the Server. If SSL is enabled on the Server, that connection fails and the driver then immediately tries to establish an SSL connection. With the mentioned configuration, only the SSL connection attempt is made.
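
        As a minimal sketch (host, port and database name are hypothetical), both parameters can be appended directly to the JDBC connection URL:

        jdbc:vdb://denodo-server:9999/customer360?wanOptimized=true&ssl=true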

  • In order to transfer data efficiently, Denodo includes parameters for specifying how many rows are fetched in one transfer when accessing data. The number of rows that the client application reads in one call to the Denodo Virtual DataPort server is configured using the Chunk Size parameter. Take into account that if consumer applications regularly access more rows than the default chunk size, the number of network round trips can be reduced by increasing the chunk size. However, note that this will also increase memory usage, because the Denodo server needs to hold each chunk in memory.

Find more information about how to select  the Fetch Size (southbound) / Chunk Size (northbound) in the Optimizing Row Fetching in Denodo document of the Denodo Knowledge Base.

  •  Contact the network administrator about the performance of the network.

Dominant factor: Slow client

If the query processing bottleneck is the client application, analyze the data processing operations performed by the consumer application in order to optimize them. For example, if you are building a report using Tableau, you can see if there are other ways to obtain the same result that perform better or tune the connection to Denodo (See ‘Tableau Capability Customizations Reference’ for details on this).

Dominant factor: Source

When the Source is the cause of a bottleneck, it may be due to a slow source execution or a slow network between the source and Denodo.

Dominant factor: Slow source

After identifying a high response time caused by the execution in the source, consider the following options:

  • If the data source is a cache data source and the query is using cached data, take into account:

  • Create indexes in the tables created by the Cache Module to improve the speed of data retrieval operations. This can be done from the VDP administration tool in the edition panel of the cached views (See ‘Cache indexes’ in the section ‘Configuring the Cache of a View’ of the Virtual DataPort Administration Guide).

  • When the content of a cached view is invalidated or reaches its time-to-live, Denodo does not immediately delete expired/invalidated rows from the physical tables of the database. Instead, Denodo provides a cache maintenance process that periodically removes these rows. Check that the execution of this process is properly configured to avoid performance degradation and excessive storage consumption in the database. For further information about cache maintenance, see the ‘Cache maintenance’ section of the Best Practices to Maximize Performance III: Caching document in the Knowledge Base.

  • Check the execution plan in the data source. Contact the administrator for that data source to analyze its poor performance. Verify that the data source is choosing the best execution plan for the query (the command to see the execution plan depends on the data source; for instance, use EXPLAIN PLAN in Oracle or SHOWPLAN in SQL Server; a sketch is shown after this list). Some of the things to verify are: Is it using the available indexes? Would it be useful to define new indexes in some of the tables? Does the remote database contain statistics for those tables?

  • Check the SQL sentence generated by Denodo. The Denodo query optimizer can rewrite queries to maximize their performance. For example, it can remove unnecessary joins or include a partial aggregation. To make sure the SQL sentence Denodo is sending is the best possible query, visit the sections ‘Review the model’ and ‘Review the configuration of the query optimizer’.

  • Consider caching. See the ‘Consider Caching’ subsection of the ‘Working on the solution’ section for information about this option.

  • Consider creating a summary in the data source or the cache (since Denodo 8.0). See the ‘Consider Summaries’ subsection of the ‘Working on the solution’ section for information about this option.
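
For instance, in Oracle the execution plan of the delegated query can be checked as follows. This is a minimal sketch: the SELECT statement stands for a hypothetical query taken from the SQL sentence property of the leaf node.

EXPLAIN PLAN FOR
SELECT s_prod_id, price FROM sales WHERE price > 100;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

If the plan shows full table scans where index accesses were expected, discuss with the data source administrator whether new indexes or fresh statistics are needed.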

Dominant factor: Slow network Source - Denodo

If the network between the source and Denodo is a bottleneck bear in mind the following considerations:

  • Contact the network administrator about the performance of the network.

  • Consider changing the fetch size. Denodo includes a parameter for specifying how many rows are fetched in one transfer when accessing data. The Fetch Size value establishes how many rows are returned in one transfer from the data sources to the Virtual DataPort server and can be configured at the data source level in JDBC/ODBC data sources. Tuning this value can improve network usage and minimize accesses. Nevertheless, note that it only gives the driver a hint and some drivers ignore it, so the actual fetch size will depend on the underlying driver. See the Optimizing Row Fetching in Denodo article, in the Denodo Knowledge Base, for detailed information.

  • Consider caching. See the ‘Consider Caching’ subsection of the ‘Working on the solution’ section for information about this option.

  • Consider creating a summary in the cache data source (since Denodo 8.0). See the ‘Consider Summaries’ subsection of the ‘Working on the solution’ section for information about this option.

Dominant factor: Denodo

When the origin of a bottleneck is in the Denodo layer, one of these three scenarios is the most common:

  1. The number of rows that the Denodo server is retrieving from the data source is too high.

  2. The row processing at Denodo Virtual DataPort, after retrieving the rows, is slow.

  3. The number of connections to a data source is too high.

If the situation is none of the above, we recommend using the Diagnostic and Monitoring tool (for real-time information) or the Denodo Monitor Tool (for historical information) to check the state of the system when the query is executed. These tools allow detecting situations, such as peaks of memory usage, that may be affecting the performance.

Dominant factor: Too many rows from a data source

Some things to review when Denodo is fetching too many rows from the data sources, or spending too much time reading rows:

  • Is there something preventing the delegation of some operations? The performance may be reduced when Virtual DataPort is not able to delegate operations that could be executed by the JDBC or ODBC data source. For more information see the Review the model section.

  • Are unnecessary JOIN operations being performed? Denodo’s engine applies branch pruning to remove unnecessary query branches and increase performance. If this optimization could be applied but is not being applied, see the Review the model section.

  • Is there a GROUP BY that is not pushed down when it should be? If the query is asking for aggregated data Denodo should apply the aggregation push-down optimization. For further information, see the Review the model section.

  • Would a Data Movement be a good option? When a query has a Join/Minus/Intersect/Union operation, the Data Movement optimization can lead to a more efficient execution. Review the Consider Data Movement section for further information.

  • Does the Fetch Size parameter have an appropriate value? It defines how many rows are returned per chunk from the data sources to the Virtual DataPort server. It is necessary to seek a balance between reducing the number of accesses to the data sources and the increase in memory usage that larger chunks produce on the server.

To know more about the Fetch Size parameter in Denodo check the article ‘Optimizing Row Fetching in Denodo’ in our Knowledge Base.

Dominant factor: Slow operation in Denodo

When the dominant factor in the execution time is the processing of Denodo operations, the reason is usually one of the following. The operation:

  • Is a JOIN operation that is not using the optimal strategy.
  • Has its Swapping property set to Yes.
  • Is a subquery in a WHERE clause.

In the uncommon situation that the scenario does not fit any of the previous cases see the section ‘Working on the solution’ to see if any of the alternatives described there can be beneficial for this particular scenario.

Join operation

If the operation is a JOIN, check if the join strategy used is the most appropriate, as using one strategy or another can change the execution time by orders of magnitude (a sketch showing how to force a specific strategy appears at the end of this subsection).

  • If the cost optimizations are disabled, use the following rule of thumb to see if the join strategy is appropriate for that join:

  • Nested: Use nested only if the amount of data on the left side is small (in the hundreds).

  • Merge: Use merge if it is possible to push each join branch entirely to a data source or the branch is ordered by the join fields.

  • Hash: Use hash otherwise.

  • If the cost optimizations are enabled, review the meta-information of the views involved in the query (statistics, primary keys, indexes and referential constraints). The optimizer is very powerful when the statistics are complete. However, estimations based on wrong or incomplete statistics can lead to very inefficient execution plans. Therefore, it is very important to make sure the statistics are complete and accurate. In cases where it is not possible to provide reliable statistics, it is better to disable the cost-based optimizer (for more information see the section ‘View Statistics’ of the document ‘Modeling Big Data and Analytic Use Cases’).

Finally, if the swapping property for the join node is set to Yes, check also the considerations in the following subsection.
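
As an orientative sketch, the join method can be forced directly in the VQL query by placing the method keyword before the join. The view names are hypothetical, and the exact grammar should be checked in the FROM clause section of the Virtual DataPort VQL Guide:

SELECT *
FROM sales HASH INNER JOIN product ON sales.s_prod_id = product.pid

Comparing the execution times obtained with the different methods (HASH, MERGE, NESTED) on a representative query is a quick way to validate the rule of thumb above.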

Swapping property is set to Yes

When the effective time for an operation is high, another interesting property to look at is the ‘Swapping’ property.

Denodo has a swapping mechanism which stores in secondary storage the intermediate results of a GROUP BY, JOIN, ORDER BY or DISTINCT operation. To check if that mechanism was triggered look for the Swapping attribute within the Advanced Properties of the trace.

If that property is true for an operation, it means that the operation needed to hold too much data in memory (more than a limit configured at server or view level) and therefore Denodo decided to offload that data to disk to avoid exceeding the system resources. This keeps the system in good health, but in return that operation will be slower, as Denodo needs to write to and read from disk.

There are several ways to prevent this swapping from happening:

  • For an operation over views from one data source, review the model to see if it is possible to delegate that operation to the source. See section ‘Review the model’ for more details.

  • See if some possible optimizations are missing. For example:

  • If the operation is a join, make sure that join is necessary and if not, review why the optimizer is not pruning that operation. See section ‘Review the model’ for more detailed information.

  • If the operation is an aggregation, review if a full or partial aggregation pushdown of the query to the data sources can be applied (see more information about these optimizations in the Denodo Query Optimizations for the Logical Data Warehouse article of the Denodo Knowledge Base).

  • If the operation is under a partitioned union, make sure it is necessary to access that union branch to get the results for that particular query. If it is not necessary, review the model and read the section ‘Partitioned UNIONs’ of the document ‘Modeling Big Data and Analytic Use Cases’.

  • See section Review the configuration of the query optimizer for more information.

  • Consider data movement: A Data Movement optimization could help improve the performance when it is possible to delegate that costly operation after moving a reasonable amount of data to a data source that can perform the operation. For more details see section ‘Consider Data Movement’.

  • Consider MPP acceleration: With SQL-on-Hadoop systems like Impala, Presto or Spark, consider configuring the Parallel Processing optimization, which allows performing costly operations with better performance (see the section ‘Consider MPP acceleration’ for details).

  • Review the swapping configuration: Review if the threshold to start writing data to disk is too low, if the size of the blocks written should be bigger and if the disk has enough space available to store that amount of data without compromising the performance of the system. For more details about this visit the section ‘Review the memory usage configuration’.

  • Consider caching. Check the ‘Consider Caching’ subsection of the ‘Working on the solution’ section for information about this option.

  • Consider creating a summary (since Denodo 8). Check the ‘Consider Summaries’ subsection of the ‘Working on the solution’ section for information about this option.

Subqueries in the WHERE clause

If the slow operation is a SEMIJOIN corresponding to a subquery used in a WHERE clause, our recommendation is to avoid the use of subqueries and use the equivalent JOIN instead, because the use of subqueries in the WHERE clause in Denodo has some limitations. For more information visit the section ‘Avoid subqueries inside the WHERE clause’ of the document ‘Modeling Big Data and Analytic Use Cases’.
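
As an illustration with hypothetical view and column names, a filtering subquery such as:

SELECT customer_id, name
FROM customer
WHERE customer_id IN (SELECT customer_id FROM sales WHERE amount > 1000)

can be rewritten as the equivalent join (the DISTINCT preserves the original semantics when a customer has several qualifying sales):

SELECT DISTINCT c.customer_id, c.name
FROM customer AS c JOIN sales AS s ON c.customer_id = s.customer_id
WHERE s.amount > 1000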

If the subquery exists as part of a row restriction, consider alternative ways to apply that security policy.

Dominant factor: Too many connections to a data source

When there are too many connections to a specific data source and that is causing issues (e.g. connection pool exhaustion), it is normally due to one of these situations:

  • There are some operations (e.g. custom functions) that, despite being applied on views from the same data source, are not being delegated as a single query. This can lead to post-processing in Denodo that prevents join operations from being pushed down.

Please visit the section ‘Review the model’ to see how to modify the model in order to maximize the operations delegated to a data source.

  • There is a JOIN operation using the Nested Parallel method with a high level of parallelism. If this value is not suitable, reduce it to avoid this problem.

If after checking the two situations explained above it is still necessary to increase the number of connections in the connection pool, go to the data source configuration and click the ‘Connection Pool configuration’ link.

Working on the solution

Although the optimal solution will depend on each particular scenario, the way to solve these bottlenecks will involve one or more of the following points:

  • Review the model. There may be an operation preventing delegation, or the built model may not be suitable for the scenario and does not take advantage of the query optimizer capabilities. The ‘Review the model’ subsection of the ‘Working on the solution’ section includes more detailed information.

  • Review the configuration of the query optimizer. In order for the Query Optimizer module to be able to apply optimizations correctly, it is essential to make sure that all the information that can be relevant for its operation, such as primary keys, indexes, referential constraints or view statistics, is available. In addition, verify that this information is accurate; this involves checking some points such as, for example, that cost-based optimization is enabled and that view statistics are complete. See the Best Practices to Maximize Performance II: Configuring the Query Optimizer document for more detailed information.

  • Review the memory usage configuration. When the Swapping attribute of the trace indicates that data has been swapped, check if the 'Maximum query size' and 'Maximum size in memory of each node' parameters have an appropriate value.

  • Consider applying one of the following acceleration techniques:

  • Data movement.
  • MPP acceleration
  • Caching
  • Summaries

The following subsections provide more details about each of these different alternatives.

Review the model

A good design of the data model can have a big impact on the query performance. One important factor is making sure the model allows maximizing the delegation to the sources. When a query uses two or more views that rely on the same JDBC or ODBC data source (i.e. there is no federation), Virtual DataPort will try to delegate the entire execution to the source database as a single query. This strategy may save a significant amount of execution time in complex views. When Virtual DataPort is unable to delegate the entire process of a certain query to a source database, it will indicate the reason in a parameter called No Delegation Cause, available in the execution trace under Advanced Properties.

If the model includes custom functions, consider developing custom functions that, besides being executable by the Virtual DataPort server, can be delegated to JDBC data sources. That means that when possible, instead of executing the Java code of the custom function, the Server invokes a function of the database. The Virtual DataPort Developer Guide has more information about this Denodo feature.

If the operation is not pushed to the data source due to a custom policy, review if there are alternative ways to apply the same security restrictions without compromising the performance.

Furthermore, the Data Virtualization engine applies query partitioning techniques (branch pruning) to remove unnecessary query branches for better performance. By applying this optimization, the number of rows that need to be brought from the data source to Denodo can be significantly reduced. For a query where branch pruning could be applied but is not, review the transformations made by the Denodo query optimizer and look for ways to enable their application, for example, by removing redundant query branches (Partitioned Unions) so that access to some data sources can be avoided. Check that the meta-information and configuration settings that are relevant for the Query Optimizer module are correctly declared (see the ‘Review the configuration of the query optimizer’ section).

Since Denodo update 20200310, when a branch pruning optimization cannot be applied, an icon is shown in the execution trace beside the view that could not be simplified, and the No simplification cause parameter shows the reason why the simplification was not possible.

In addition, if the cost-based optimization is enabled, the execution engine selects the best execution plan based on the estimated cost of executing each of them. If the query is asking for aggregated data, make sure that Denodo is applying the aggregation push-down optimization and is therefore obtaining a partially aggregated result from the source. This optimization reduces the execution time as well as the amount of data moved through the network. If the query is not being rewritten to take advantage of the aggregation push-down, review whether the meta-information and configuration settings that are relevant for the Query Optimizer module are correctly declared (see the ‘Review the configuration of the query optimizer’ section).
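
As an orientative example of this rewriting (the view names are hypothetical), for a query that aggregates data coming from several sources:

SELECT customer_id, SUM(amount)
FROM all_sales
GROUP BY customer_id

the optimizer can delegate a partial aggregation such as the following to each source, so that each source returns one row per customer instead of every sales row, and Denodo only has to merge the partial results:

SELECT customer_id, SUM(amount) AS partial_amount
FROM sales_region1
GROUP BY customer_id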

Finally, check the modeling recommendations in Denodo for an up to date guide on modeling and its performance implications.

Review the configuration of the query optimizer

The Denodo query optimizer explores the different but equivalent execution plans that can obtain the result for a certain query, and decides which one will achieve the best performance. Among other options, Denodo can perform query rewritings; that is, Denodo analyzes the query and tries to generate an equivalent statement with better performance.

Make sure that the meta-information and configuration settings that are relevant for the Query Optimizer module (e.g. primary keys, indexes, referential constraints, view statistics...) are correctly declared and accurate. For more detailed information, see the Best Practices to Maximize Performance II: Configuring the Query Optimizer document in the Knowledge Base.
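
As a sketch, view statistics can be gathered from the administration tool or, assuming the predefined GENERATE_STATS stored procedure available in recent Denodo versions, from VQL (the view name is hypothetical):

CALL GENERATE_STATS('bv_sales');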

Review the memory usage configuration


Take into account that if there is either a “memory limit reached” flag in the query or an operation swapping data to disk, a good solution to speed up the query is usually NOT changing the memory limits, but one of the other approaches described in this section. Be aware that increasing these values can increase the overall memory usage of the system. Therefore, make sure the maximum query size is suitable for the number of concurrent queries that can be running at peak times.

That said, there are cases where it makes sense to increase the default value if it works better for a specific environment as the new value avoids unnecessary offloading of data to disk.

For those cases, consider modifying the default values: click Server configuration on the Administration menu and then click Memory under the Execution engine tab.

In this panel configure the following parameters:

  • ‘Maximum size in memory of intermediate results’: sets the maximum size in MB the server can use to hold rows that are waiting to be processed during an execution. If this limit is reached, Denodo will pause the data retrieval from the underlying sources and the property ‘Memory limit reached’ = ‘Yes’ will show up in the paused nodes of the execution plan.

  • ‘Maximum size in memory of each node’: indicates the maximum amount of data in MB that an operation performed in Denodo can hold in memory. If this limit is reached, Denodo will start swapping that data to disk and the property ‘Swapping’ = ‘Yes’ will appear in the node corresponding to that operation in the execution plan.

  • ‘Maximum size of the blocks stored in swap’: sets the maximum size of the files that store these results on disk when swapping is active.

  • ‘Maximum query size’: controls the amount of data Denodo can hold in memory for a query, considering both the buffers for intermediate results and the swapping limits.

  • ‘Limit query memory usage’: when this option is enabled, the Execution Engine will make sure a query does not use more memory than the maximum specified in ‘Maximum query size’.

Notice that when the option ‘Limit query memory usage’ is enabled, the Execution Engine can automatically decrease the values of ‘Maximum size in memory of each node’ and ‘Maximum size in memory of intermediate results’ for a particular query to ensure that the query execution will not surpass the limit set in ‘Maximum query size’. Find more detailed information about these parameters and their use in the Limit the Maximum Amount of Memory of a Query section of the Virtual DataPort Administration Guide.

See the section Configuring the Memory Usage and Swapping Policy in the Denodo Virtual DataPort Administration Guide for more details.

Consider Data Movement

When there is a Join/Minus/Intersect/Union operation where one of the views is much larger than the others, Virtual DataPort can transfer the data of the smaller views into the data source of the larger view and execute the operation on this last data source. This may offer great performance improvements.

For example, let’s say we want to obtain the number of different prices for a product:

SELECT prod_name, COUNT(DISTINCT price)
FROM sales JOIN product ON (s_prod_id = pid)
GROUP BY prod_name

If the GROUP BY operation is slowing down the response because the data for sales and product resides in different data sources and the operation cannot be delegated, it is possible to transfer the data of ‘product’, the smaller view, into the data source of ‘sales’ and, this way, execute the entire operation in that data source.

In order to allow this optimization, remember that the destination data source has to be enabled as a possible data movement target. In addition, to obtain the best performance, make sure the bulk data load settings are configured if they are available for the data source.

It is possible to force the data movement manually or to let the cost optimizer decide if that is the best plan to follow. In the latter case, check that the cost-based optimization is enabled and that the meta-information and configuration settings that are relevant for the Query Optimizer module (e.g. primary keys, indexes, referential constraints, view statistics...) are correctly declared.
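
For illustration, a data movement can be forced manually with the DATAMOVEMENTPLAN attribute of the CONTEXT clause. The following is a sketch for the example above, where ds_sales stands for the hypothetical JDBC data source of ‘sales’ (see the ‘Data Movement’ section of the Administration Guide for the exact syntax):

SELECT prod_name, COUNT(DISTINCT price)
FROM sales JOIN product ON (s_prod_id = pid)
GROUP BY prod_name
CONTEXT(DATAMOVEMENTPLAN = product : JDBC ds_sales)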

For more information about Data Movement check the section ‘Data Movement’ of the Denodo Virtual DataPort Administration Guide and the section 'Data movement configuration' in the Knowledge base article Best Practices to Maximize Performance II: Configuring the Query Optimizer.

Consider MPP acceleration

SQL-on-Hadoop systems like Impala, Presto or Spark can be imported into Denodo like any other data source, allowing the query optimizer to create temporary tables to accelerate some queries when the Virtual DataPort server detects that it would have to perform costly operations on large amounts of data. See the ‘MPP acceleration’ section of the Best Practices to Maximize Performance II: Configuring the Query Optimizer document for more information about configuring the query optimizer in order to use MPP acceleration.

Consider Caching

Denodo provides a Cache Module that allows, among other purposes, enhancing performance, protecting data sources from costly queries and reusing complex data combinations and transformations. Detailed information regarding caching can be found in the Best Practices to Maximize Performance III: Caching document.

Consider Summaries

Since version 8, Denodo allows the creation of a new element called Summary, which stores common intermediate results that the query optimizer can then use as a starting point to accelerate analytical queries.

Summaries have some advantages over the traditional cache:

  • Summaries are transparent to the user: unlike the traditional cache, there is no need to create an auxiliary view to enable the cache. The query optimizer automatically analyzes whether it can rewrite the incoming query to take advantage of the data in the summary, without the user being aware of its existence.

  • Summaries are not restricted to the data source configured for traditional caching. This means that it is possible to create as many summaries as wanted for the same view/query in different data sources.

The creation, insertion and refreshing process is very similar to the one used for remote tables but, unlike remote tables, summaries provide full data lineage to the views used in their definition, in the same way as a standard derived view. This allows not only keeping the data lineage information, but also invalidating the data automatically if the definition of an underlying view changes. For more information visit the section ‘Summaries’ of the VDP Administration Guide.
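
As an orientative sketch (the names are hypothetical; see the ‘Summaries’ section of the VQL Guide for the complete statement and its options), a summary is defined with a query whose result will be materialized:

CREATE SUMMARY s_sales_by_customer
AS SELECT customer_id, SUM(amount) AS total_amount
FROM sales
GROUP BY customer_id

Once the summary is created and refreshed, a query that aggregates sales by customer can be transparently rewritten by the optimizer to read from the summary instead of from the base views.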


Appendix I: Complete flowchart to identify the dominant factor

Step 1) Identify the client as the dominant factor

Step 2) Identify the source as the dominant factor

Step 3) Identify Denodo as the dominant factor

Disclaimer
The information provided in the Denodo Knowledge Base is intended to assist our users in advanced uses of Denodo. Please note that the results from the application of processes and configurations detailed in these documents may vary depending on your specific environment. Use them at your own discretion.
For an official guide of supported features, please refer to the User Manuals. For questions on critical systems or complex environments we recommend you to contact your Denodo Customer Success Manager.
