Azure Datalake Storage Put - Data360_Analyze - Latest

Data360 Analyze Server Help

Product type
Software
Portfolio
Verify
Product family
Data360
Product
Data360 Analyze
Version
Latest
Language
English
Product name
Data360 Analyze
Title
Data360 Analyze Server Help
Copyright
2024
First publish date
2016
Last updated
2024-11-28
Published on
2024-11-28T15:26:57.181000

Uploads files to an Azure Datalake Storage location.

Azure Datalake Storage nodes enable you to access data lakes on Azure storage, so that you can integrate them into your data flows.

Note: The ADLS nodes have been updated to support Azure Data Lake Storage Gen 2. As part of this change, these nodes will no longer support Gen 1 storage accounts. The following announcement by Microsoft instructs users of Gen 1 to migrate to Gen 2: https://azure.microsoft.com/en-us/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/
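The Gen 2 distinction is visible in the storage endpoint itself: Gen 2 accounts are addressed through the `dfs.core.windows.net` suffix, whereas Gen 1 accounts used `azuredatalakestore.net`. As an illustrative sketch only (the node builds its endpoint internally; the account, file system and path names below are hypothetical), the Gen 2 object URL can be assembled like this:

```python
def adls_gen2_url(account_name, file_system, remote_path):
    """Build the HTTPS endpoint for an ADLS Gen 2 object.

    Gen 2 accounts use the `dfs.core.windows.net` suffix; Gen 1
    accounts (no longer supported by these nodes) used
    `azuredatalakestore.net`.
    """
    return (f"https://{account_name}.dfs.core.windows.net/"
            f"{file_system}/{remote_path.lstrip('/')}")

url = adls_gen2_url("myaccount", "myfs", "/data/in.csv")
print(url)  # https://myaccount.dfs.core.windows.net/myfs/data/in.csv
```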

Uploading a single file to an Azure Datalake Storage location

  1. Drag an Azure Datalake Storage Put node onto the canvas, then in the RemotePath property, specify the name of the Azure Datalake Storage location to which you want to upload your file.

  2. In the relevant fields, provide your Azure AccountName, together with either the AccountKey, or the combination of the ClientID, ClientSecret and TenantID.

  3. In the LocalPath property, specify the location of the file that you want to upload.

Uploading multiple files to an Azure Datalake Storage location

  1. Drag a Directory List node onto the canvas and connect it to an Azure Datalake Storage Put node. Run the Directory List node to generate a list of files in a given location.

  2. On the Azure Datalake Storage Put node, in the RemotePath property, enter the path to the Azure Datalake Storage location.

  3. In the relevant fields, provide your Azure AccountName, together with either the AccountKey, or the combination of the ClientID, ClientSecret and TenantID.

  4. Select the (from Field) variant of the LocalPath property, then specify the name of the input field that references the location of the files to upload.
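The steps above can be sketched in miniature. This is only an illustration of the (from Field) lookup, not the node's implementation; the record shape and the field name "FileName" are hypothetical — the real field name is whatever the upstream Directory List node emits:

```python
# Hypothetical input records, as a Directory List node might produce.
records = [
    {"FileName": "/data/a.csv"},
    {"FileName": "/data/b.csv"},
]

def local_paths_from_field(records, field_name):
    """Resolve the (from Field) variant of LocalPath: read the path
    of each file to upload from the named field of each input record."""
    return [record[field_name] for record in records]

print(local_paths_from_field(records, "FileName"))
# ['/data/a.csv', '/data/b.csv']
```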

Tip: For additional information on Azure Datalake Storage, see the Microsoft Azure online documentation.

Properties

FileSystem

Specify the file system of the Azure Datalake Storage.

A value is required for this property.

RemotePath

Specify the path to the Azure Datalake Storage objects.

A value is required for this property.

AccountName

Specify the Azure Account Name.

A value is required for this property.

In addition to the AccountName, one of the following should be entered:

  • AccountKey

    The Azure Secret Key.

Or the combination of:

  • ClientID

    The Client ID for the registered app.

  • ClientSecret

    The Client Secret for the registered app.

  • TenantID

    The Tenant ID (directory) for the registered app.
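The two credential alternatives above can be summarized as a small validation sketch. This is illustrative only, assuming the documented rule (AccountName plus either AccountKey, or the full ClientID/ClientSecret/TenantID trio), not the node's actual validation code:

```python
def resolve_auth(props):
    """Pick the authentication mode from the node's credential
    properties: 'account_key' when AccountKey is set,
    'service_principal' when ClientID, ClientSecret and TenantID
    are all set. One complete set is required alongside AccountName."""
    if not props.get("AccountName"):
        raise ValueError("AccountName is required.")
    if props.get("AccountKey"):
        return "account_key"
    if all(props.get(k) for k in ("ClientID", "ClientSecret", "TenantID")):
        return "service_principal"
    raise ValueError(
        "Provide AccountKey, or ClientID + ClientSecret + TenantID.")

resolve_auth({"AccountName": "acct", "AccountKey": "k"})  # 'account_key'
```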

LocalPath

Specify the location of the file to upload to Azure Datalake Storage.

A value is required for this property.

Choose the (from Field) variant of this property to look up the value from an input field with the name specified.

Overwrite

Set to False to prevent overwriting an existing file that has the same name.

The default value is True.
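The effect of the Overwrite property can be sketched as a simple decision, purely as an illustration of the documented behavior:

```python
def should_upload(remote_exists, overwrite=True):
    """Decide whether to upload when a same-named remote file may
    exist. Mirrors the node's Overwrite property: True (the default)
    replaces an existing file; False leaves it in place."""
    return overwrite or not remote_exists
```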

FailureBehavior

Optionally specify what to do when a file fails to upload. Choose from:

  • Error - Report error and stop further processing.
  • Log - Log a warning message and skip the file.
  • Ignore - Skip the file.

The default value is Log.
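The three FailureBehavior options can be sketched as a dispatch, assuming (illustratively, not as the node's implementation) that Error raises, Log warns and continues, and Ignore continues silently:

```python
import logging

def handle_upload_failure(path, behavior="Log"):
    """Dispatch an upload failure per the FailureBehavior property:
    'Error' stops further processing, 'Log' records a warning and
    skips the file, 'Ignore' skips the file without a message."""
    if behavior == "Error":
        raise RuntimeError(f"Upload failed for {path}; stopping.")
    if behavior == "Log":
        logging.warning("Upload failed for %s; skipping.", path)
    # 'Ignore': fall through and skip the file silently.
```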

Enabled

Optionally specify whether the node is enabled or disabled.

You can either choose True or False, or reference another property (see the Using derived property values topic) which will be evaluated to a true or false value.

Disabled nodes are not executed, even if they are selected to run.

The default value is True.

Note: The Enabled property cannot reference (either directly or indirectly) any of the Run Properties of a data flow.

LogLevel

Optionally specify the level at which non-fatal messages are logged.

The lower the level, the more information will be recorded in the log file. Choose from:

  • 0 - Information
  • 1 - Low
  • 2 - Medium
  • 3 - High
  • 4 - Fatal

The default value is 2 (Medium), which can be changed in the ls_brain_node.prop configuration file, by modifying the property ls.brain.node.logLevel.
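The threshold rule above (a lower LogLevel records more messages) can be sketched as follows, purely as an illustration of the documented levels:

```python
LOG_LEVELS = {0: "Information", 1: "Low", 2: "Medium", 3: "High", 4: "Fatal"}

def is_logged(message_level, node_log_level=2):
    """A non-fatal message is recorded only when its level is at or
    above the node's LogLevel threshold (default 2, Medium), so
    lowering the threshold records more messages."""
    return message_level >= node_log_level
```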

Inputs and outputs

Inputs: 1 optional.

Outputs: uploaded files, errors.