HDFS Path

Use this type of path to obtain data from a file or a set of files stored in an HDFS file system. You can also use this path type with other routes if there is a Hadoop connector for them.

You can find information about the Filters tab in the section Compressed or Encrypted Data Sources; filters work the same way for any type of path (local, HTTP, FTP…).

Configuration

In URI, enter the path from which you want to obtain the data. It can point to a file or a directory, and it can contain interpolation variables (see section Paths and Other Values with Interpolation Variables).

In Hadoop properties you can set the same Hadoop properties that you would put in the Hadoop configuration files, such as core-site.xml. This also allows you to use other routes if there is a Hadoop connector for them, as explained in the section Support for Hadoop-Compatible Routes.
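
For reference, the following minimal Java sketch shows how a standalone Hadoop client consumes the same kind of name/value pairs; the host name, property values, and file path used here are hypothetical. The properties you enter in Hadoop properties are handed to the underlying Hadoop libraries in the same form as entries in core-site.xml.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HadoopPropertiesSketch {
        public static void main(String[] args) throws Exception {
            // Each entry in "Hadoop properties" is a plain name/value pair, equivalent to a
            // <property><name>...</name><value>...</value></property> element in core-site.xml.
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // hypothetical name node
            conf.set("dfs.client.use.datanode.hostname", "true");         // example tuning property

            // A standalone client resolves the URI against this configuration.
            try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020/"), conf)) {
                System.out.println(fs.exists(new Path("/data/sales.csv"))); // hypothetical path
            }
        }
    }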

Paths Pointing to a Directory

When you create a base view over a data source whose path points to a directory, Virtual DataPort infers the schema of the new view from the first file in the directory and assumes that all the other files have the same schema.

Only for delimited-file data sources: if the path points to a directory and you enter a value in File name pattern, the data source only processes the files whose name matches the regular expression entered in this box. For example, to process only the files with the extension log, enter (.*)\.log.
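
If you are unsure which file names a pattern will select, the sketch below checks a few hypothetical names against (.*)\.log using standard Java regular expressions (assumed here to be the syntax applied to the pattern).

    import java.util.regex.Pattern;

    public class FileNamePatternCheck {
        public static void main(String[] args) {
            // (.*)\.log only matches file names that end in ".log"
            Pattern pattern = Pattern.compile("(.*)\\.log");
            String[] names = {"server.log", "server.log.gz", "notes.txt", "2024-01-01.log"};
            for (String name : names) {
                // matches() requires the whole file name to match the expression
                System.out.println(name + " -> " + pattern.matcher(name).matches());
            }
        }
    }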

Note

For XML data sources, if a Validation file has been provided, all files in the directory have to match that Schema or DTD.

Support for Hadoop-Compatible Routes

You can use the HDFS path type with other routes if there is a Hadoop connector for them. For example, if you configure the ABFS connector, you can work with Azure Data Lake Storage Gen2 routes. These are the steps to use such routes with the ABFS connector:

  1. Download the jars required by the Hadoop connector. In this case, the ABFS connector needs hadoop-azure.jar. Make sure you obtain the connector version that matches the Hadoop libraries distributed with the platform.

  2. Import the jar in the platform:

    1. Click the menu File > Extension Management. Then, in the tab Libraries, click Import.

    2. Select jar as the resource type and click Add to add the jar.

    3. Restart Virtual DataPort.

  3. In URI, enter the path. It will look like this: "abfs://<container>@<account_name>.dfs.core.windows.net/<path>/<file_name>".

  4. Configure the Hadoop properties; they are required to set up authentication. Check the Hadoop documentation for the available authentication methods and the properties that configure them. For example, to use OAuth 2.0 client credentials, set these properties (a standalone verification sketch follows this list):

    Name                                      Value
    fs.azure.account.auth.type                OAuth
    fs.azure.account.oauth.provider.type      org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
    fs.azure.account.oauth2.client.endpoint   <URL of OAuth endpoint>
    fs.azure.account.oauth2.client.id         <The client identifier>
    fs.azure.account.oauth2.client.secret     <The client secret>

  5. In the Authentication section, select None.
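
Before creating the data source, you may want to verify the jar and the OAuth properties with a standalone Hadoop client. The following Java sketch is only an illustration under the assumptions of this example: it reuses the property names from the table in step 4, and the endpoint, client identifier, secret, container, account, and path are placeholders that you must replace with your own values. hadoop-azure.jar and its dependencies must be on the classpath for it to run.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AbfsConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Same name/value pairs as in the table of step 4 (OAuth 2.0 client credentials).
            Configuration conf = new Configuration();
            conf.set("fs.azure.account.auth.type", "OAuth");
            conf.set("fs.azure.account.oauth.provider.type",
                     "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider");
            conf.set("fs.azure.account.oauth2.client.endpoint", "<URL of OAuth endpoint>"); // placeholder
            conf.set("fs.azure.account.oauth2.client.id", "<The client identifier>");       // placeholder
            conf.set("fs.azure.account.oauth2.client.secret", "<The client secret>");       // placeholder

            // Placeholder URI with the same shape as the one entered in step 3.
            URI uri = URI.create("abfs://<container>@<account_name>.dfs.core.windows.net/");

            try (FileSystem fs = FileSystem.get(uri, conf)) {
                // List the directory that the data source will read from.
                for (FileStatus status : fs.listStatus(new Path("/<path>"))) {
                    System.out.println(status.getPath());
                }
            }
        }
    }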

Authentication

These are the authentication methods available:

  • None: use this option if the HDFS server does not require authentication.

  • Simple: you have to configure the user name. This authentication mode is equivalent to setting the HADOOP_USER_NAME variable when you execute Hadoop commands in a terminal (see the sketch after this list).

  • Kerberos with user and password: you have to configure the user name and the password.

  • Kerberos with keytab: you have to configure the user name and you have to upload the keytab.
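
As a point of reference for the Simple option, the sketch below shows how a standalone Hadoop client would declare the same identity with UserGroupInformation; the user name and path are hypothetical. In Virtual DataPort you only need to enter the user name in the data source.

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class SimpleAuthSketch {
        public static void main(String[] args) throws Exception {
            // Simple authentication only declares who you are; there is no credential check.
            // It is the programmatic counterpart of exporting HADOOP_USER_NAME before
            // running "hadoop fs" commands in a terminal.
            UserGroupInformation ugi = UserGroupInformation.createRemoteUser("etl_user"); // hypothetical user
            ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
                Configuration conf = new Configuration();
                try (FileSystem fs = FileSystem.get(conf)) {
                    System.out.println(fs.exists(new Path("/data"))); // hypothetical path
                }
                return null;
            });
        }
    }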