
Performance Benchmarks

I'm currently seeing performance issues when we run queries extracting data from tables with multimillion records. Data movement is slow and CPU consumption is quite high during query execution. As a first step, I would like to compare against the benchmarks and see if this is expected behaviour or needs configuration changes.

Current configuration: 4 processors (2 CPUs), 16 GB RAM.

Please share the complete list, as I will need it for other environments.
user
08-01-2018 18:58:43 -0500

1 Answer

Hi,

As Denodo is a middleware platform, there are a lot of variables involved in trying to create benchmarks. I would double-check that your configuration takes into account the [Hardware](https://community.denodo.com/docs/html/browse/6.0/platform/installation/preinstallation_tasks/hardware_requirements/hardware_requirements) and [Software Requirements](https://community.denodo.com/docs/html/browse/6.0/platform/installation/preinstallation_tasks/software_requirements/software_requirements#software-requirements-for-virtual-dataport).

To investigate the performance issues on your multimillion-record queries, I'd suggest performing some [Load Testing with Apache JMeter](https://community.denodo.com/kb/view/document/Denodo%20Load%20Testing%20with%20Apache%20JMeter?category=Performance+%26+Optimization) or a similar tool to isolate where the issues are coming from and determine which Denodo settings you may want to adjust based on your results.

Since you are specifically referencing tables with multimillion records, you may also want to look at our tips on [Optimizing Row Fetching](https://community.denodo.com/kb/view/document/Optimizing%20Row%20Fetching%20in%20Denodo?category=Performance+%26+Optimization) or revisit your [Data Movement](https://community.denodo.com/docs/html/browse/6.0/vdp/administration/optimizing_queries/data_movement/data_movement) strategy. For example, check the other base views involved in the query against the multimillion-record table: a view that is itself high volume may not be a good candidate for data movement, even if its record set is small in comparison to the target data source.

Hope this helps!
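To make the row-fetching tip concrete, here is a minimal Java JDBC sketch that streams a large result set with an explicit fetch size. It assumes a Virtual DataPort server on localhost:9999 with default admin credentials and a hypothetical view `my_large_view`; the `jdbc:vdb://` URL is Denodo's JDBC format, the Denodo JDBC driver jar must be on the classpath, and the fetch size of 1000 is purely illustrative.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a Virtual DataPort server.
        String url = "jdbc:vdb://localhost:9999/admin";

        try (Connection conn = DriverManager.getConnection(url, "admin", "admin");
             Statement stmt = conn.createStatement()) {

            // Stream rows in batches of 1000 instead of buffering the whole
            // multimillion-row result set in memory at once.
            stmt.setFetchSize(1000);

            // "my_large_view" stands in for your multimillion-record view.
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM my_large_view")) {
                long rows = 0;
                while (rs.next()) {
                    rows++; // process each row here
                }
                System.out.println("Fetched " + rows + " rows");
            }
        }
    }
}
```

Fetch size trades memory for network round trips, so measuring a few values under a JMeter load test is usually more telling than any single benchmark figure.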
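And to illustrate the data movement caveat, the sketch below forces a data movement through the CONTEXT clause of a VQL query, following the pattern in the Data Movement documentation linked above. The view names (`sales`, `customer`) and the target JDBC data source (`ds_sales`) are hypothetical: the intent is to ship the smaller `customer` view into the data source that holds the large `sales` table so the join executes there.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class DataMovementSketch {
    // Ships the (assumed small) "customer" view into the JDBC data source
    // "ds_sales" so the join with the large "sales" table runs inside that
    // database. All view and data source names are hypothetical.
    static void runWithDataMovement(Connection conn) throws Exception {
        String vql =
            "SELECT sales.order_id, customer.name "
          + "FROM sales INNER JOIN customer "
          + "ON sales.customer_id = customer.id "
          + "CONTEXT('DATAMOVEMENTPLAN' = customer : JDBC ds_sales)";

        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(vql)) {
            while (rs.next()) {
                // Process joined rows; the join itself ran in ds_sales.
            }
        }
    }
}
```

This is exactly the caveat above: if `customer` were itself a high-volume view, moving it would only relocate the bottleneck, so the hint pays off only when the moved side is genuinely small.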
Denodo Team
10-01-2018 09:27:29 -0500