Pipeline components

Pipelines are made up of a number of components, collectively known as "data stages". Each data stage is intended to handle a different phase of your organization's data processing workflow.

The following table lists the available data stages, what each stage requires, and which users can create it.

Note: Administrators can create all types of data stage.
Data Stage       Requires                  Can be created by
Path             Pipeline                  Users with Write permissions to a Pipeline.
Data Store       Path                      Users with Create Data Store and Write permissions.
Data View        One or more Data Stores   Users with Create Data View and Write permissions.
Dashboard        Data View                 Users with Create Dashboard and Write permissions.
Rule Library     Path                      Users with Create Rule Library and Write permissions.
Analysis         Data Store                Users with Create Analysis and Write permissions.
Process Model    A set of Data Stages      Users with Create Process Model and Write permissions.
Analytic Model   Data Store                Users with Create Analytic Model and Write permissions.
Case Store       One or more Data Stages   Users with Create Case Store and Write permissions.
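
The prerequisite and permission rules in this table reduce to a simple lookup: each data stage needs a parent object plus a create permission, with Write permission required in every case. The minimal Python sketch below models that lookup for illustration only; none of the names here are part of the Data360 DQ+ API.

```python
# Illustrative only; these structures are not part of the Data360 DQ+ API.
# Each data stage maps to (what it requires, the permission needed to create it).
STAGE_RULES = {
    "Path":           ("Pipeline",                "Write"),
    "Data Store":     ("Path",                    "Create Data Store"),
    "Data View":      ("One or more Data Stores", "Create Data View"),
    "Dashboard":      ("Data View",               "Create Dashboard"),
    "Rule Library":   ("Path",                    "Create Rule Library"),
    "Analysis":       ("Data Store",              "Create Analysis"),
    "Process Model":  ("A set of Data Stages",    "Create Process Model"),
    "Analytic Model": ("Data Store",              "Create Analytic Model"),
    "Case Store":     ("One or more Data Stages", "Create Case Store"),
}

def can_create(stage: str, user_permissions: set, is_admin: bool = False) -> bool:
    """Return True if the user may create the given data stage."""
    if is_admin:  # administrators can create all types of data stage
        return True
    _, create_permission = STAGE_RULES[stage]
    needed = {"Write"} if create_permission == "Write" else {"Write", create_permission}
    return needed <= user_permissions

print(can_create("Data View", {"Write", "Create Data View"}))  # True
print(can_create("Dashboard", {"Write"}))                      # False
```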

Path

Paths provide an additional layer of organization within pipelines and can contain any combination of data stages that exist within a pipeline, including other paths.

Once a pipeline has been created, you must create at least one path before you can create any other data stages.

Paths help to organize a pipeline's structure and enable the administrator to enforce more granular access rights. For example, you could create one path for dashboards accessible only by business users and another path for data stores accessible only by developers.

Data store

Data stores retain and structure data, and they can be used as an input to, or created as an output by, other data stages.

Data stores are containers for tabular information coming from internal or external data sources. They contain all the information from a given source, as opposed to data views, which only contain information from selected fields.

Data view

A data view is used to select specific fields from data stores, which users can then visualize when performing data exploration or building dashboards.

Data views define which fields within a chosen set of data stores can be used in a data exploration or dashboard. Data views contain Dimensions, Load Filters, Query Filters, and other user-defined properties. These properties determine what will be returned when a data view loads data and what can be visualized when the data view is used to build a dashboard or explore data.
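
Conceptually, a data view is a declarative selection over its data stores: load filters and the selected fields control what is loaded, and query filters control what is shown. The sketch below is an illustrative model only; the class and property names are assumptions, not the Data360 DQ+ configuration format.

```python
# Illustrative model of a data view; names and structure are assumptions,
# not the Data360 DQ+ configuration format.
from dataclasses import dataclass, field

@dataclass
class DataView:
    source_fields: list                                 # fields selected from the data store(s)
    dimensions: list = field(default_factory=list)
    load_filters: list = field(default_factory=list)    # applied when the view loads data
    query_filters: list = field(default_factory=list)   # applied when the view is visualized

    def load(self, data_store):
        """Apply load filters, then keep only the selected fields."""
        rows = [r for r in data_store if all(f(r) for f in self.load_filters)]
        return [{k: r[k] for k in self.source_fields} for r in rows]

    def query(self, loaded_rows):
        """Apply query filters at exploration or dashboard time."""
        return [r for r in loaded_rows if all(f(r) for f in self.query_filters)]

# Example: expose two fields and restrict to recent, high-value orders.
view = DataView(
    source_fields=["region", "amount"],
    dimensions=["region"],
    load_filters=[lambda r: r["year"] == 2024],
    query_filters=[lambda r: r["amount"] > 100],
)
store = [{"region": "EU", "amount": 250, "year": 2024},
         {"region": "US", "amount": 50,  "year": 2023}]
print(view.query(view.load(store)))  # [{'region': 'EU', 'amount': 250}]
```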

Dashboard

Data360 DQ+ dashboards enable self-service business intelligence that can be shared throughout an organization.

Dashboards are visual representations of the data contained in a data view after it has executed and loaded.

Dashboards residing in pipelines will typically be curated reports created by experienced users. Once created, pipeline dashboards may be shared with end users to serve as interactive reports.

Rule library

A rule library allows you to create reusable data quality rule sets that can be applied to data in an analysis. Rules created in a rule library are applied within an analysis via the Execute Rule Library node.
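
For intuition, a reusable rule set can be pictured as a named collection of checks defined once and applied to any record set. The sketch below is hypothetical Python, not the product's rule syntax or the behavior of the Execute Rule Library node.

```python
# Hypothetical sketch of a reusable data quality rule set; not Data360 DQ+ syntax.
# A rule returns True when a record passes the check.
RULE_LIBRARY = {
    "email_present":   lambda r: bool(r.get("email")),
    "amount_positive": lambda r: r.get("amount", 0) > 0,
}

def apply_rules(records, rules):
    """Annotate each record with the names of the rules it fails."""
    return [dict(r, failed_rules=[name for name, rule in rules.items() if not rule(r)])
            for r in records]

data = [{"email": "a@example.com", "amount": 10},
        {"email": "", "amount": -5}]
for row in apply_rules(data, RULE_LIBRARY):
    print(row["failed_rules"])  # [] then ['email_present', 'amount_positive']
```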

Analysis

An analysis is essentially a drawing board for data enhancement and analytics. Analyses are used to manipulate data in an existing data store by applying functions to selected fields. The output of an analysis is typically a new data store.

An analysis can be used to manipulate data in novel ways. That data can be placed into a new data store, which can then be used to create other data stages.
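
As a generic illustration of that pattern (not Data360 DQ+ code; the function and field names are assumptions), an analysis reads rows from an existing store, applies functions to selected fields, and produces a new store:

```python
# Generic illustration of the analysis concept; not product code.
def run_analysis(data_store, field_functions):
    """Apply a function to each selected field and return a new data store."""
    output = []
    for record in data_store:
        new_record = dict(record)
        for field_name, func in field_functions.items():
            new_record[field_name] = func(record[field_name])
        output.append(new_record)
    return output

# Example: trim names and convert amounts from cents to dollars.
source = [{"name": " Ada ", "amount_cents": 1250}]
result = run_analysis(source, {"name": str.strip,
                               "amount_cents": lambda c: c / 100})
print(result)  # [{'name': 'Ada', 'amount_cents': 12.5}]
```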

Process model

Process models enable the orchestration of tasks performed on other data stages. They can be used to automate many of the manual, repetitive tasks involved in the management of data stages and your organization's data processing workflow.

Process models automate data stage management. For example, you could create a process model that executes a set of data views on a predetermined schedule, so that the data views are always loaded with the most recent and relevant data.
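
Process models are configured in the product's designer rather than written as code, but the scheduling idea can be sketched as a simple loop. The example below is illustrative only, using Python's standard library; the task and view names are assumptions.

```python
# Illustrative orchestration loop; process models are configured in
# Data360 DQ+, not written in Python. Names here are assumptions.
import sched
import time

def refresh_data_views(view_names):
    """Stand-in for the task a process model would trigger."""
    for name in view_names:
        print(f"Reloading data view: {name}")

def schedule_refresh(interval_seconds, view_names, runs=2):
    """Run the refresh task on a fixed interval, a fixed number of times."""
    scheduler = sched.scheduler(time.time, time.sleep)
    for i in range(runs):
        scheduler.enter(i * interval_seconds, 1, refresh_data_views, (view_names,))
    scheduler.run()

schedule_refresh(interval_seconds=5, view_names=["Sales", "Inventory"])
```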

Analytic model

Analytic models provide predictive analytics, allowing users to apply machine learning by training models on historical data sets.

Analytic models have predictive value. They allow users to generate predictions about the future using machine learning techniques.
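
For intuition only, the pattern of training on historical data and predicting new outcomes looks like the following, which uses scikit-learn as a stand-in; it is not how analytic models are built or configured in Data360 DQ+, and the feature names are invented for the example.

```python
# Conceptual illustration of training on historical data to predict new outcomes.
# scikit-learn is used as a stand-in; this is not Data360 DQ+ functionality.
from sklearn.linear_model import LogisticRegression

# Historical records: [order_amount, days_since_last_purchase] -> churned (0 or 1)
historical_X = [[120, 10], [30, 90], [200, 5], [15, 120], [90, 30], [10, 150]]
historical_y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(historical_X, historical_y)

# Predict for a new customer the model has not seen before.
print(model.predict([[25, 100]]))        # e.g. [1] -> likely to churn
print(model.predict_proba([[25, 100]]))  # class probabilities
```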

Case store

A case store allows you to associate records from other data stages in a common repository. Users of the case store can then manage these records through screens that you define to display specific information about each case.
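
As a closing illustration (a hypothetical model, not the case store implementation), the core idea is that records from different data stages are grouped under a common case identifier so they can be reviewed together:

```python
# Hypothetical illustration of associating records from several data stages
# under a common case; not the Data360 DQ+ case store implementation.
from collections import defaultdict

def build_cases(records_by_stage, key_field):
    """Group records from each data stage by a shared key (the 'case')."""
    cases = defaultdict(lambda: defaultdict(list))
    for stage_name, records in records_by_stage.items():
        for record in records:
            cases[record[key_field]][stage_name].append(record)
    return cases

cases = build_cases(
    {"Orders":     [{"customer_id": "C1", "order": 42}],
     "Complaints": [{"customer_id": "C1", "ticket": 7}]},
    key_field="customer_id",
)
print(dict(cases["C1"]))  # {'Orders': [{...}], 'Complaints': [{...}]}
```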