Based on the exception you're seeing, it sounds like there may be a misconfiguration when connecting and authenticating to your Hadoop cluster.
I was able to set up a connection to a parquet file by doing the following steps:
- **Click File > Extension Management**
- Import the denodo-dfs-customwrapper JAR, or verify that it has already been imported.
- **Create a New > Data Source > Custom**
- Select Jars > Select the denodo-dfs-customwrapper
- Select Class name: com.denodo.connect.dfs.wrapper.DFSParquetFileWrapper
- **Enter the input parameters of your data source**
- File system URI: in your case, hdfs://sdclxd00918.nomura.com:5181
- Set up the **core-site.xml** to authenticate to your Hadoop cluster. From your description, it sounds like you have not set this up yet. To do this, open the conf folder of the downloaded DFS Custom Wrapper, for example: **/denodo-dfs-customwrapper-8.0-20210503/conf/core-site.xml**
- The **core-site.xml** file contains templates for the credentials of various data sources (e.g. Azure Data Lake Storage, Google Cloud Storage, AWS S3A). You only need to uncomment the properties for your data source and fill in the credentials that authenticate you to it. More information on the core-site.xml file can be [found here](https://community.denodo.com/docs/html/document/denodoconnects/8.0/en/Denodo%20Distributed%20File%20System%20Custom%20Wrapper%20-%20User%20Manual).
- Save the core-site.xml, then, back in the data source in VDP, set the parameter "Custom core-site.xml file" to Local and enter the path to the file.
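As an illustration, a minimal core-site.xml for a plain HDFS connection could look like the sketch below. The property names (`fs.defaultFS`, `hadoop.security.authentication`) are standard Hadoop settings, but the values shown here are placeholders based on your URI, so adjust them to your cluster:

```xml
<!-- Illustrative sketch only: values are placeholders, not a verified config -->
<configuration>
  <!-- Default file system URI; should match the "File system URI"
       parameter entered in the Denodo data source -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://sdclxd00918.nomura.com:5181</value>
  </property>

  <!-- Uncomment only if your cluster requires Kerberos authentication -->
  <!--
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  -->
</configuration>
```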
- As for your question on Kerberos: you would only need to enable it if you want to authenticate to your HDFS data source with Kerberos credentials. Otherwise, setting up the core-site.xml alone should be sufficient.
- **Afterwards, you can try creating a base view.**
- Your parquet file path looks fine, but you need to define the file name pattern too. I would use this: **(.*)part-(.*)\\.snappy\\.parquet**
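To sanity-check that the suggested pattern matches typical Snappy Parquet part-file names, here is a quick Python sketch. Note that the double backslashes above are escapes; in a raw Python string a single backslash suffices. The file names below are illustrative examples of the part files Spark/Hadoop jobs commonly produce, not names taken from your cluster:

```python
import re

# The file name pattern suggested above, written as a raw string
pattern = re.compile(r"(.*)part-(.*)\.snappy\.parquet")

# Hypothetical examples of typical part-file names
names = [
    "part-00000.snappy.parquet",
    "part-00001-4d1e2b7a.c000.snappy.parquet",
]

for name in names:
    # fullmatch ensures the whole file name conforms to the pattern
    assert pattern.fullmatch(name), name

print("all matched")
```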
So, in summary: set up the core-site.xml to authenticate to your HDFS data source, and define the Parquet file name pattern as well.
Hope this helps!