As mentioned in the [Denodo Distributed File System Custom Wrapper](https://community.denodo.com/docs/html/document/denodoconnects/7.0/Denodo%20Distributed%20File%20System%20Custom%20Wrapper%20-%20User%20Manual#h.ftz4bpl2u6o0) user manual, in order to create a base view over a Parquet file you can use the (H)DFSParquetFileWrapper custom wrapper (class name: com.denodo.connect.hadoop.hdfs.wrapper.HDFSParquetFileWrapper). Similarly, to create a base view over a CSV file, you can use the (H)DFSDelimitedTextFileWrapper custom wrapper (class name: com.denodo.connect.hadoop.hdfs.wrapper.HDFSDelimitedTextFileWrapper).
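For illustration, the VQL below sketches how a custom data source referencing the Parquet wrapper class might look. This is only an outline under assumptions: the data source name, JAR name, and parameter names shown here are placeholders, and the exact parameters generated for your environment will come from the Virtual DataPort Administration Tool wizard as described in the user manual.

```sql
-- Hypothetical sketch: names and parameters are illustrative, not exact
CREATE DATASOURCE CUSTOM ds_hdfs_parquet
    CLASSNAME = 'com.denodo.connect.hadoop.hdfs.wrapper.HDFSParquetFileWrapper'
    JARS 'denodo-hdfs-customwrapper'
    PARAMETERS (
        'File system URI' = 'hdfs://namenode:8020',
        'Parquet File path' = '/data/my_file.parquet'
    );
```

In practice it is usually easier to create the data source and base view graphically from the Administration Tool (New > Data source > Custom), which generates the equivalent VQL for you with the correct parameter names.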
You can also refer to a similar Q&A [here](https://community.denodo.com/answers/question/details?questionId=9064u000000CfAtAAK&Export+base+view+to+Azure+data+lake+storage), which explains in detail how to export query results to a Parquet file by developing a custom exporter for Denodo Scheduler.
If you still need help and you are a valid Support User, you can open a Support Case at the Denodo Support Site and the Support Team will assist you.
Hope this helps!