Too many open files error

Hi, there is a cache job in Denodo DEV which is throwing the error below:

    Error executing query. Total time 1248.706 seconds.
    DV_PRDCTREV_DATA_SALES_TEAM [ORDERBY] [ERROR] Received exception with message
    '/opt/denodo/7.0/work/vdp/swap/bxOkTyYuwM5onqrKTquiTQ(eq)(eq)0.502675665572522.tmp (Too many open files)'

I have already tried the option mentioned in the page below:

https://community.denodo.com/answers/question/details?questionId=9060g0000000AMyAAM&title=Denodo+Scheduler+Cache+update+jobs+are+failing+due+to+%5BGROUPBY%5D+%5BERROR%5D

It is still throwing the error. Could you please share other options to try?
user
28-11-2023 08:05:29 -0500
1 Answer

Hi,

This error generally occurs when you are running a resource-intensive query, or queries that use [swapping](https://community.denodo.com/docs/html/browse/7.0/vdp/administration/server_administration_-_configuring_the_server/configuring_the_memory_usage_and_swapping_policy/configuring_the_memory_usage_and_swapping_policy), and the swapping has exceeded the maximum number of open files allowed by your operating system. Swapping is a mechanism that creates temporary files on disk when memory consumption exceeds a certain limit, in order to avoid memory overflow. For this case, I would follow the steps suggested in the community page you mentioned.

Otherwise, this error can occur when the entire query cannot be delegated to the source. In that scenario, the general recommendation is that whenever queries can be delegated, they should be delegated. If entire queries cannot be delegated, I would make sure all possible selections and aggregations are pushed down to the source.

However, if you are a valid customer, I would suggest raising a case at the [Denodo Support site](https://support.denodo.com/) so your scenario can be analyzed in more detail.

Hope this helps!
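As a quick diagnostic, you can compare the per-process open-file limit (which the VDP server inherits at startup) against the number of descriptors the server actually holds. This is a minimal sketch assuming a Linux host; the process-name pattern `denodo` and the limit values in the comments are assumptions you should adapt to your own installation.

```shell
# Minimal sketch: is the "Too many open files" limit being hit?
# Assumes Linux; the "denodo" process pattern is an assumption.

# 1. Per-process open-file limits for the current user.
echo "soft nofile limit: $(ulimit -Sn)"
echo "hard nofile limit: $(ulimit -Hn)"

# 2. Count file descriptors currently held by the VDP server process.
VDP_PID=$(pgrep -f denodo | head -n 1)
if [ -n "$VDP_PID" ]; then
  echo "open fds for PID $VDP_PID: $(ls /proc/"$VDP_PID"/fd 2>/dev/null | wc -l)"
else
  echo "no denodo process found on this host"
fi

# 3. To raise the limit persistently, add lines like these (values are
#    illustrative) to /etc/security/limits.conf and restart the server:
#      denodo  soft  nofile  65536
#      denodo  hard  nofile  65536
```

If the fd count from step 2 sits close to the soft limit while a swapping-heavy job runs, raising `nofile` (step 3) is the usual remedy alongside the tuning steps in the linked page.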
Denodo Team
08-12-2023 07:17:22 -0500