Do you want to learn how to build data quality projects in Azure Data Factory using data flows to prepare data for analytics at scale? In a recent webinar, Mark Kromer, Sr. Program Manager on the Azure Data Factory team, shows you how to do this without writing any Spark code.
This presentation does not focus on Azure Data Factory in general; instead, it focuses on how to begin building and understanding data quality, using Data Factory as the 'data engineer tool' for building ETL and data pipeline patterns for data quality in Azure.
Azure Data Factory is a broad platform for data movement, ETL, and data integration, so it would take days to cover the topic in full. This one-hour webinar covers mapping and wrangling data flows.
The presentation spends some time on Data Factory components, including pipelines, data flows, and triggers. But most of the time is spent diving into data quality for data warehousing (including demos) and common tasks you'll do daily if your job is ETL:
- Verify data types and lengths
- How to handle NULLs
- Domain value constraints
- Single source of truth (master data)
- Late arriving dimensions
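In the webinar, checks like these are built visually as data flow transformations rather than in code. As a rough illustration only, here is a hypothetical sketch in plain Python of the first three tasks above (the record fields, length limit, and region values are invented for the example):

```python
# Hypothetical illustration of three data quality checks; in ADF these
# would be visual data flow transformations, not hand-written code.
VALID_REGIONS = {"US", "EU", "APAC"}  # domain value constraint (assumed values)

def clean_record(rec):
    """Apply type/length, NULL, and domain-value checks to one record."""
    issues = []

    # Verify data types and lengths: coerce to string, enforce a max length
    name = str(rec.get("name") or "")
    if len(name) > 50:
        name = name[:50]
        issues.append("name truncated to 50 chars")

    # Handle NULLs: substitute a default value for a missing quantity
    qty = rec.get("quantity")
    qty = int(qty) if qty is not None else 0

    # Domain value constraint: reject values outside the allowed set
    region = rec.get("region")
    if region not in VALID_REGIONS:
        issues.append(f"unknown region: {region!r}")
        region = None

    return {"name": name, "quantity": qty, "region": region}, issues

record = {"name": "Contoso", "quantity": None, "region": "LATAM"}
cleaned, issues = clean_record(record)
print(cleaned)
print(issues)
```

The same pattern of flagging rows with issues rather than silently dropping them mirrors how data flows can route failing rows to an error sink for review.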
So, if you're looking to learn how to build data quality projects in Azure Data Factory using data flows, then this webinar is for you. You can watch the complete webinar below. To see the presenter's slides, click here.
Need help using your data and the cloud to grow your business? Our consulting offerings can help you move, manage, or expand your business in the cloud. No matter where you are in your cloud journey, our experts can get you where you want to be. Click the link below to learn more.