Exam 70-475 Content
Batch processing and design
- Ingest data for batch and interactive processing
- Ingest from cloud-born or on-premises data, store data in Microsoft Azure Data Lake, store data in Azure Blob Storage, perform a one-time bulk data transfer, perform routine small writes on a continuous basis
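Where the outline distinguishes a one-time bulk transfer from routine small writes, Blob Storage exposes two matching blob types. A minimal Python sketch using the azure-storage-blob package (the connection string, container, and file names are placeholders; at very large scale a bulk transfer would more likely use AzCopy or the Import/Export service):

```python
import os
from azure.storage.blob import BlobServiceClient

# Placeholder connection details; real values come from the storage account.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"])

# One-time bulk transfer: upload a large file as a block blob.
bulk = service.get_blob_client(container="landing", blob="history/sales-2016.csv")
with open("sales-2016.csv", "rb") as data:
    bulk.upload_blob(data, overwrite=True)

# Routine small writes on a continuous basis: an append blob accepts
# incremental blocks without rewriting the whole object.
stream = service.get_blob_client(container="landing", blob="events/today.log")
stream.create_append_blob()
stream.append_block(b"2016-08-01T12:00:00Z,device-42,ok\n")
```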
- Design and provision compute clusters
- Select compute cluster type, estimate cluster size based on workload
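Estimating cluster size usually starts as a back-of-the-envelope calculation before any benchmarking. An illustrative heuristic in Python; the per-node throughput figure below is an assumption, not a published number, and should be replaced with a measured rate:

```python
def estimate_worker_nodes(daily_gb: float, window_hours: float,
                          node_gb_per_hour: float = 50.0) -> int:
    """Estimate worker nodes so the batch window covers the daily volume.

    node_gb_per_hour is an assumed per-node processing rate; benchmark
    your own workload and substitute the measured figure.
    """
    required_rate = daily_gb / window_hours        # GB/hour the cluster must sustain
    nodes = -(-required_rate // node_gb_per_hour)  # ceiling division
    return max(2, int(nodes))                      # keep a minimum for redundancy

# Example: 2 TB/day in a 4-hour window -> 10 nodes at the assumed rate.
print(estimate_worker_nodes(2000, 4))
```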
- Design for data security
- Protect personally identifiable information (PII) data in Azure, encrypt and mask data, implement role-based security, implement row-level security
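Encrypting and masking PII can be practised locally before any Azure service is involved. A small sketch of deterministic masking; the salted-hash scheme and the functions here are illustrative choices, not a mandated approach:

```python
import hashlib

SALT = b"rotate-me"  # illustrative; store real salts and keys in Azure Key Vault

def pseudonymise(value: str) -> str:
    """Replace a PII value with a stable salted hash so joins still work."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Show only the first character of the local part, e.g. j***@contoso.com."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

print(pseudonymise("jane.doe@contoso.com"))
print(mask_email("jane.doe@contoso.com"))
```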
- Design for batch processing
Real-time processing and design
- Ingest data for real-time processing
- Select data ingestion technology, design partitioning scheme, design row key of event tables in HBase
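A common row-key design for event tables in HBase combines a salt, which spreads writes across region servers, with a reversed timestamp, so the newest events sort first. A sketch of that pattern; the two-byte salt and the key layout are illustrative choices:

```python
import time
import zlib

SALT_BUCKETS = 16        # illustrative; align with the number of pre-split regions
MAX_MILLIS = 10 ** 13    # fixed ceiling keeps reversed timestamps fixed-width

def event_row_key(device_id: str, event_millis: int) -> bytes:
    """salt | device id | reversed timestamp: spreads monotonic writes across
    region servers while keeping the newest events first within a device."""
    salt = zlib.crc32(device_id.encode("ascii")) % SALT_BUCKETS  # stable bucket
    reversed_ts = MAX_MILLIS - event_millis                      # newest sorts first
    return f"{salt:02d}|{device_id}|{reversed_ts:013d}".encode("ascii")

print(event_row_key("device-42", int(time.time() * 1000)))
```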
- Design and provision compute resources
- Select streaming technology in Azure, select real-time event processing technology, select real-time event storage technology, select streaming units, configure cluster size, select the right technology for business requirements, assign appropriate resources for HBase clusters
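On the ingestion side, Azure Event Hubs is the usual event buffer in front of Stream Analytics or Storm on HDInsight. A minimal producer sketch using the azure-eventhub Python package (the connection string and hub name are placeholders); the partition key ties back to the partitioning-scheme skill above:

```python
import os
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details; real values come from the Event Hubs namespace.
producer = EventHubProducerClient.from_connection_string(
    os.environ["EVENT_HUB_CONNECTION_STRING"], eventhub_name="telemetry")

with producer:
    # Events sharing a partition key land in the same partition,
    # preserving per-device ordering for downstream consumers.
    batch = producer.create_batch(partition_key="device-42")
    batch.add(EventData('{"device":"device-42","reading":21.5}'))
    producer.send_batch(batch)
```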
- Design for Lambda architecture
- Identify applications of Lambda architecture, utilise streaming data to draw business insights in real time, utilise streaming data to show trends in data in real time, utilise streaming data and convert it into batch data to get a historical view, design so that batch processing doesn't introduce latency, utilise batch data for deeper data analysis
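The defining move in a Lambda architecture is the serving layer merging a precomputed batch view with the speed layer's recent increments, so batch latency never hides fresh data. The idea as a Python sketch; the view shapes and numbers are invented for illustration:

```python
# Batch view: complete but stale, recomputed from the master dataset.
batch_view = {"device-42": 10_000}   # e.g. event counts up to the last batch run

# Speed view: incremental, covers only events since that batch run.
speed_view = {"device-42": 37}

def serve(key: str) -> int:
    """Merge views at query time: historical depth plus real-time freshness."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(serve("device-42"))  # 10037
```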
- Design for real-time processing
- Design for latency and throughput, design reference data streams, design business logic, design visualisation output
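Reference data streams behave like a slowly changing lookup table joined to the live stream, with business logic applied per event. The same idea as a Python sketch; the event and reference-data shapes are invented for illustration:

```python
from typing import Optional

# Slowly changing reference data, e.g. refreshed hourly from Blob Storage.
device_reference = {
    "device-42": {"site": "Oslo", "threshold": 25.0},
}

def enrich_and_filter(event: dict) -> Optional[dict]:
    """Business logic step: join each event to reference data, keep breaches."""
    ref = device_reference.get(event["device"])
    if ref is None or event["reading"] <= ref["threshold"]:
        return None
    return {**event, "site": ref["site"], "alert": True}

print(enrich_and_filter({"device": "device-42", "reading": 27.1}))
```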
End-to-end operations
- Create a data factory
- Identify data sources, identify and provision data processing infrastructure, utilise Visual Studio to design and deploy pipelines, deploy Data Factory jobs
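The exam targets the Visual Studio tooling of Data Factory v1, but the provisioning step can also be done programmatically. A sketch using the current azure-mgmt-datafactory Python SDK, which addresses Data Factory v2 rather than the exam-era v1 (subscription, resource group, and factory names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

# Placeholder identifiers; substitute your own subscription and resource group.
adf_client = DataFactoryManagementClient(DefaultAzureCredential(),
                                         "<subscription-id>")
factory = adf_client.factories.create_or_update(
    "my-resource-group", "my-data-factory", Factory(location="westeurope"))
print(factory.provisioning_state)
```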
- Orchestrate data processing activities in a data-driven workflow
- Leverage data-slicing concepts, identify data dependencies and chain multiple activities, model complex schedules based on data dependencies, provision and run data pipelines
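Data slicing in Data Factory v1 divides a dataset's timeline into fixed windows, and each activity run is bound to one slice via the SliceStart and SliceEnd system variables. The scheduling arithmetic as a self-contained sketch:

```python
from datetime import datetime, timedelta

def slices(start: datetime, end: datetime, frequency: timedelta):
    """Yield (SliceStart, SliceEnd) windows. A slice only runs once its
    upstream dependencies for the same window are ready, which is how
    chained activities stay aligned."""
    cursor = start
    while cursor < end:
        yield cursor, cursor + frequency
        cursor += frequency

for slice_start, slice_end in slices(datetime(2016, 8, 1), datetime(2016, 8, 4),
                                     timedelta(days=1)):
    print(f"run activity for {slice_start:%Y-%m-%d} .. {slice_end:%Y-%m-%d}")
```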
- Monitor and manage the data factory
- Identify failures and root causes, create alerts for specified conditions, perform a restatement, start and stop data factory pipelines
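Monitoring can likewise be scripted. A sketch against the v2 Python SDK that triggers a run and polls its status; the pipeline name is a placeholder, and note that v1 slice restatement was done through the Monitor & Manage app or PowerShell rather than this API:

```python
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(DefaultAzureCredential(),
                                         "<subscription-id>")

# Trigger a pipeline, then poll the run until it leaves the in-progress states.
run = adf_client.pipelines.create_run(
    "my-resource-group", "my-data-factory", "CopySalesPipeline", parameters={})
while True:
    status = adf_client.pipeline_runs.get(
        "my-resource-group", "my-data-factory", run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(status)  # e.g. Succeeded, or Failed -> root-cause via activity run details
```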
- Move, transform, and analyse data
- Leverage Pig, Hive, and MapReduce for data processing; copy data between on-premises and cloud; copy data between cloud data sources; leverage stored procedures; leverage Machine Learning batch execution for scoring, retraining, and updating resources; extend the data factory with custom processing steps; load data into a relational store; visualise using Power BI
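Copying between cloud data sources is normally a Copy Activity inside the factory; the same movement can be sketched directly with the storage SDK. Here, a server-side copy between two blob accounts (URLs, names, and the SAS token are placeholders; the source must be readable by the service):

```python
import os
from azure.storage.blob import BlobServiceClient

dest = BlobServiceClient.from_connection_string(
    os.environ["DEST_STORAGE_CONNECTION_STRING"])
blob = dest.get_blob_client(container="warehouse", blob="staging/sales.csv")

# Server-side, asynchronous copy: the data never passes through this client.
source_url = "https://sourceaccount.blob.core.windows.net/landing/sales.csv?<sas>"
blob.start_copy_from_url(source_url)
print(blob.get_blob_properties().copy.status)  # "pending" until the copy finishes
```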
- Design a deployment strategy for an end-to-end solution
- Leverage PowerShell for deployment, automate deployment programmatically, design deployment strategies for automation
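The outline names PowerShell, but "automate deployment programmatically" also covers the management SDKs. A sketch deploying an ARM template describing the whole solution with the azure-mgmt-resource Python package (the template path, resource group, and deployment name are placeholders):

```python
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("azuredeploy.json") as f:   # ARM template for the end-to-end solution
    template = json.load(f)

# Incremental mode adds or updates resources without deleting ones the
# template omits, which suits repeatable automated deployments.
poller = client.deployments.begin_create_or_update(
    "my-resource-group", "bigdata-solution",
    Deployment(properties=DeploymentProperties(
        mode="Incremental", template=template)))
print(poller.result().properties.provisioning_state)
```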