
Joining Large Tables

There are 11 views in the join. Of these, six views have between 33 and 84 million rows, two have between 2 and 4 million rows, and the remaining three have between 220K and 850K rows. The query uses only simple joins between the views, but due to the data size it fails to pull the data. All the base views have primary keys, but no associations have been created so far; statistics are available for these views.

Our inference is that we will end up with a large data set in spite of performance tuning, which increases the network latency to transmit the data. One approach is to introduce view parameters to filter on certain conditions, but we need the entire data set to build a holistic dashboard view. My question is how to handle massive data sets while at the same time reducing the network latency. What are the best practices for handling large data sets? (I did review the performance tuning options in the KB and am applying them to speed things up, but it is still a work in progress.) Please advise.
DB2
user
29-01-2020 13:58:50 -0500

1 Answer

Hi, I always go to the KB article as my first stop to optimize performance, but since you have already been through it, my next step would be the [Denodo Cookbook: Query Optimization](https://www.denodo.com/en/document/e-book/denodo-cookbook-query-optimization). This is a more in-depth manual on optimization, and it is sometimes exactly what I need for that extra bit of tuning that makes everything run a little smoother. Hope this helps!
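
As a small illustration of one client-side knob that can help when a dashboard has to pull a very large result set, the sketch below sets the JDBC fetch size so rows are streamed in batches instead of being buffered all at once on the client. It is only a sketch: the driver class name, URL, credentials, and view name are assumptions for illustration, so adjust them to your Denodo version and environment.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LargeResultSetExample {
    public static void main(String[] args) throws Exception {
        // Assumed Denodo JDBC driver class and URL format; verify them
        // against the documentation for your Denodo version.
        Class.forName("com.denodo.vdp.jdbc.Driver");
        String url = "jdbc:vdb://denodo-host:9999/my_database";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {

            // Ask the driver to stream rows in batches instead of
            // materializing the whole result set in client memory.
            stmt.setFetchSize(10_000);

            // "my_joined_view" is a hypothetical name for the 11-view join.
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM my_joined_view")) {
                long count = 0;
                while (rs.next()) {
                    // Process each row as it arrives (e.g. feed the dashboard's
                    // aggregation layer) rather than collecting everything first.
                    count++;
                }
                System.out.println("Rows processed: " + count);
            }
        }
    }
}
```

This does not reduce the total amount of data transferred, so it complements rather than replaces the optimizer-side tuning covered in the KB article and the cookbook.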
Denodo Team
31-01-2020 19:35:22 -0500