If I were a valid support user and needed to connect to Azure Data Lake Storage from the Virtual DataPort server and read xlsx, parquet, and avro files from it, I would use the [Denodo Distributed File System Custom Wrapper](https://community.denodo.com/docs/html/document/denodoconnects/latest/Denodo%20Distributed%20File%20System%20Custom%20Wrapper%20-%20User%20Manual). For detailed steps, refer to the [How to Connect to Azure Data Lake Storage from Denodo Platform](https://community.denodo.com/videos/details/Cloud/Denodo%207.0/How%20to%20connect%20to%20Azure%20Data%20Lake%20Storage%20from%20Denodo%20Platform) video.
To export the output of a base view to Azure Data Lake Storage as an .xlsx or .csv file, I would either create a [custom handler](https://community.denodo.com/docs/html/browse/7.0/scheduler/administration/developer_api/extensions_plugins/handlers) or [mount the Azure Data Lake Storage](https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/azure/azure-datalake#--mount-azure-data-lake-storage-gen1-resource-using-a-service-principal-and-oauth-20) as a local drive, and then use the CSV exporter in a Scheduler job to export the data. For more information, refer to the [Postprocessing Section](https://community.denodo.com/docs/html/browse/7.0/scheduler/administration/creating_and_scheduling_jobs/configuring_new_jobs/postprocessing_section_exporters) of the Scheduler Administration Guide.
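As an illustration of what the custom-handler route would do, here is a minimal Python sketch: serialize the rows returned by a job into CSV bytes, then push them to ADLS Gen2 with the `azure-storage-file-datalake` package. The function names, account URL, container, and file path below are hypothetical placeholders, not part of the Denodo API; a real custom handler would be written in Java against the Scheduler extension interfaces.

```python
import csv
import io


def rows_to_csv_bytes(header, rows):
    """Serialize query rows into CSV bytes suitable for upload."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue().encode("utf-8")


def upload_to_adls(account_url, credential, file_system, path, data):
    """Upload the CSV bytes to an ADLS Gen2 path.

    Assumes the azure-storage-file-datalake package is installed and
    that the credential has write access to the container.
    """
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(account_url=account_url, credential=credential)
    file_client = service.get_file_system_client(file_system).get_file_client(path)
    file_client.upload_data(data, overwrite=True)


# Serialize two example rows; the upload call is commented out because it
# needs a real storage account and credential.
payload = rows_to_csv_bytes(["id", "name"], [[1, "a"], [2, "b"]])
# upload_to_adls("https://<account>.dfs.core.windows.net", "<account-key>",
#                "mycontainer", "exports/view.csv", payload)
```

Mounting the storage as a local drive avoids this code entirely, since the built-in CSV exporter can then write to the mounted path directly.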
Hope this helps!