Hi,
As Denodo is a middleware platform, there are many variables involved in creating benchmarks. First, I would double-check that your configuration takes into account the [Hardware](https://community.denodo.com/docs/html/browse/6.0/platform/installation/preinstallation_tasks/hardware_requirements/hardware_requirements) and [Software Requirements](https://community.denodo.com/docs/html/browse/6.0/platform/installation/preinstallation_tasks/software_requirements/software_requirements#software-requirements-for-virtual-dataport).

To investigate the performance issues in your multimillion-record queries, I'd suggest performing some [Load Testing with Apache JMeter](https://community.denodo.com/kb/view/document/Denodo%20Load%20Testing%20with%20Apache%20JMeter?category=Performance+%26+Optimization) or a similar tool to isolate where the issues come from and determine which Denodo settings you may want to adjust based on your results.

Since you specifically mention tables with millions of records, you may also want to look at our tips on [Optimizing Row Fetching](https://community.denodo.com/kb/view/document/Optimizing%20Row%20Fetching%20in%20Denodo?category=Performance+%26+Optimization) or revisit your [Data Movement](https://community.denodo.com/docs/html/browse/6.0/vdp/administration/optimizing_queries/data_movement/data_movement) strategy. For example, check the other base views involved in the query against the multimillion-record table: if any of them are themselves high volume, they may not be good candidates for data movement, even if their result sets are small compared to the target data source.
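As a rough illustration of the row-fetching idea (this sketch is not Denodo-specific: it uses Python's DB-API with an in-memory SQLite table as a stand-in for a connection through Denodo's JDBC/ODBC interface), you can time how the fetch batch size affects total retrieval time. Against a real remote source, small batches add network round trips, so the effect is far more pronounced than in this local example:

```python
import sqlite3
import time

def timed_fetch(conn, query, batch_size):
    """Fetch all rows of `query` in batches of `batch_size`;
    return (row_count, elapsed_seconds)."""
    cur = conn.cursor()
    cur.arraysize = batch_size  # default size hint used by fetchmany()
    start = time.perf_counter()
    cur.execute(query)
    total = 0
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        total += len(rows)
    return total, time.perf_counter() - start

# Stand-in data source: 100,000 rows in an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, f"row-{i}") for i in range(100_000)))

# Compare a tiny fetch size against a much larger one.
for size in (10, 10_000):
    count, secs = timed_fetch(conn, "SELECT * FROM t", size)
    print(f"batch={size:>6}: {count} rows in {secs:.3f}s")
```

Running the same measurement through your actual Denodo connection (e.g. from a JMeter JDBC sampler) with different fetch sizes is one quick way to see whether row fetching, rather than query planning, is where the time goes.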
Hope this helps!