Ingest data from cloud-born or on-premises sources, store data in Microsoft Azure Data Lake, store data in Azure Blob Storage, perform a one-time bulk data transfer, perform routine small writes on a continuous basis
Design and provision compute clusters
Select compute cluster type, estimate cluster size based on workload
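The provisioning itself can be scripted with the AzureRM PowerShell module. A minimal sketch, assuming a hypothetical resource group, storage account, and cluster name, sized here at four worker nodes:

```powershell
# Minimal sketch: provision a Linux Spark cluster on HDInsight with AzureRM.
# All names (resource group, storage account, cluster) are placeholders.
$httpCred   = Get-Credential -Message "Cluster login (HTTP) credentials"
$sshCred    = Get-Credential -Message "SSH credentials"
$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName "bigdata-rg" `
                  -Name "bigdatastore")[0].Value

New-AzureRmHDInsightCluster `
    -ResourceGroupName "bigdata-rg" `
    -ClusterName "demo-spark" `
    -Location "West US" `
    -ClusterType Spark `
    -OSType Linux `
    -ClusterSizeInNodes 4 `
    -HttpCredential $httpCred `
    -SshCredential $sshCred `
    -DefaultStorageAccountName "bigdatastore.blob.core.windows.net" `
    -DefaultStorageAccountKey $storageKey `
    -DefaultStorageContainer "demo-spark"
```

-ClusterType (Hadoop, HBase, Storm, Spark) and -ClusterSizeInNodes are the two knobs the objectives above refer to: pick the type for the workload, then estimate the worker-node count from expected data volume and parallelism.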
Design for data security
Protect personally identifiable information (PII) data in Azure, encrypt and mask data, implement role-based security, implement row-based security
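One concrete way to cover the encrypt-and-mask objective is dynamic data masking on Azure SQL Database. A hedged sketch with placeholder server, database, and column names (row-level security itself is defined in T-SQL with CREATE SECURITY POLICY rather than through cmdlets):

```powershell
# Sketch: turn on dynamic data masking for a database, then mask an email
# column so non-privileged queries see obfuscated values. Names are placeholders.
Set-AzureRmSqlDatabaseDataMaskingPolicy `
    -ResourceGroupName "bigdata-rg" `
    -ServerName "demo-sqlserver" `
    -DatabaseName "CustomerDb" `
    -DataMaskingState "Enabled"

New-AzureRmSqlDatabaseDataMaskingRule `
    -ResourceGroupName "bigdata-rg" `
    -ServerName "demo-sqlserver" `
    -DatabaseName "CustomerDb" `
    -SchemaName "dbo" `
    -TableName "Customers" `
    -ColumnName "Email" `
    -MaskingFunction "Email"
```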
Design for batch processing
Real-time processing and design
Ingest data for real-time processing
Select data ingestion technology, design partitioning scheme, design row key of event tables in HBase
Design and provision compute resources
Select streaming technology in Azure, select real-time event processing technology, select real-time event storage technology, select streaming units, configure cluster size, select the right technology for business requirements, assign appropriate resources for HBase clusters
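For the Stream Analytics pieces, provisioning is typically driven from a JSON job definition, which is where the inputs, query, outputs, and streaming-unit count all live. A rough sketch with hypothetical names and paths:

```powershell
# Sketch: create a Stream Analytics job from a JSON definition and start it.
# JobDefinition.json (hypothetical) declares inputs, outputs, the query, and
# the transformation's streamingUnits value.
New-AzureRmStreamAnalyticsJob `
    -ResourceGroupName "bigdata-rg" `
    -Name "demo-streamjob" `
    -File "C:\asa\JobDefinition.json"

Start-AzureRmStreamAnalyticsJob `
    -ResourceGroupName "bigdata-rg" `
    -Name "demo-streamjob"
```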
Design for Lambda architecture
Identify application of Lambda architecture, utilise streaming data to draw business insights in real time, utilise streaming data to show trends in data in real time, utilise streaming data and convert into batch data to get historical view, design such that batch data doesn't introduce latency, utilise batch data for deeper data analysis
Design for real-time processing
Design for latency and throughput, design reference data streams, design business logic, design visualisation output
End-to-end operation
Create a data factory
Identify data sources, identify and provision data processing infrastructure, utilise Visual Studio to design and deploy pipelines, deploy Data Factory Jobs
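Creating the factory itself is one cmdlet with the ADF (v1) module; the resource group, name, and region below are placeholders:

```powershell
# Sketch: create an empty (v1) data factory to hang pipelines off.
New-AzureRmDataFactory `
    -ResourceGroupName "bigdata-rg" `
    -Name "DemoDataFactory" `
    -Location "West US"
```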
Orchestrate data processing activities in a data-driven workflow
Leverage data-slicing concepts, identify data dependencies and chaining multiple activities, model complex schedules based on data dependencies, provision and run data pipelines
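A hedged sketch of provisioning and running a pipeline: linked services, datasets, and pipelines deploy from JSON files, and setting the pipeline's active period is what makes slices get produced and activities run. All paths and names are hypothetical:

```powershell
# Sketch: deploy the pipeline's building blocks from JSON definitions.
New-AzureRmDataFactoryLinkedService -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -File "C:\adf\StorageLinkedService.json"

New-AzureRmDataFactoryDataset -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -File "C:\adf\InputDataset.json"

New-AzureRmDataFactoryPipeline -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -File "C:\adf\CopyPipeline.json"

# Run the pipeline over a week: one slice per scheduled interval.
Set-AzureRmDataFactoryPipelineActivePeriod -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -PipelineName "CopyPipeline" `
    -StartDateTime "2017-01-01Z" -EndDateTime "2017-01-08Z"
```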
Monitor and manage the data factory
Identify failures and root causes, create alerts for specified conditions, perform a restatement, start and stop data factory pipelines
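A sketch of the day-two side, against the same hypothetical factory: inspect slice status, restate a failed slice so it reprocesses, and pause or resume a pipeline:

```powershell
# Sketch: check slice states for a dataset.
Get-AzureRmDataFactorySlice -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -DatasetName "OutputDataset" `
    -StartDateTime "2017-01-01Z"

# Restatement: push a slice back to Waiting so it (and its upstream
# dependencies in the pipeline) get reprocessed.
Set-AzureRmDataFactorySliceStatus -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -DatasetName "OutputDataset" `
    -StartDateTime "2017-01-01Z" -EndDateTime "2017-01-02Z" `
    -Status "Waiting" -UpdateType "UpstreamInPipeline"

# Stop and start the pipeline.
Suspend-AzureRmDataFactoryPipeline -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -Name "CopyPipeline"
Resume-AzureRmDataFactoryPipeline -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -Name "CopyPipeline"
```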
Move, transform, and analyse data
Leverage Pig, Hive, and MapReduce for data processing; copy data between on-premises and cloud; copy data between cloud data sources; leverage stored procedures; leverage Machine Learning batch execution for scoring, retraining, and update resource; extend the data factory with custom processing steps; load data into a relational store; visualise using Power BI
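For the on-premises-to-cloud copies specifically, ADF v1 needs a Data Management Gateway registered in the factory before a linked service can reach an on-premises store. A minimal sketch with a placeholder gateway name:

```powershell
# Sketch: register a gateway resource; the matching Data Management Gateway
# software then gets installed and registered on the on-premises machine.
New-AzureRmDataFactoryGateway -ResourceGroupName "bigdata-rg" `
    -DataFactoryName "DemoDataFactory" -Name "OnPremGateway" `
    -Description "Gateway for on-premises data sources"
```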
Design a deployment strategy for an end-to-end solution
Leverage PowerShell for deployment, automate deployment programmatically, design deployment strategies for automation
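A hedged sketch of what that automation usually reduces to: an ARM template that describes every resource, pushed by one cmdlet you can call from any release pipeline. The template and parameter file paths are hypothetical:

```powershell
# Sketch: idempotent, scriptable deployment of the whole solution.
New-AzureRmResourceGroup -Name "bigdata-rg" -Location "West US" -Force

New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "bigdata-rg" `
    -TemplateFile "C:\deploy\azuredeploy.json" `
    -TemplateParameterFile "C:\deploy\azuredeploy.parameters.json"
```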
When you run an Azure Functions project locally, you might get a detailed error message as follows:
Missing value for AzureWebJobsStorage in local.settings.json. This is required for all triggers other than httptrigger, kafkatrigger. You can run 'func azure functionapp fetch-app-settings ' or specify a connection string in local.settings.json.
The fix: just add this to your local.settings.json:
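A minimal sketch of the file; "UseDevelopmentStorage=true" assumes you are running against the local storage emulator (Azurite). For a real storage account, paste its connection string instead:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}
```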
This means SonarQube doesn't have any info about your project: you need to set up your project in SonarQube.
The end goal here is to get your login token, which is a hash, for example "9f6cf20347297da17ce4591b1d2ada820a30d23e", and your project key, which is a unique alphanumeric value that identifies your project.
Go to Create Project -> Generate a token -> Provide your project name -> Select C#, VB, or Java (when prompted to "Run analysis on your project").
At the end of the dialog you will get a Login (which is a hash) and a Project Key.
Then you pass that info to MSBuild or whatever build tool you are currently using, as sketched below.
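With SonarScanner for MSBuild, for example, the project key and login token go into the begin step, the normal build runs in between, and the end step uploads the results. A sketch assuming a SonarQube server at the default http://localhost:9000 and a hypothetical project key and solution name:

```powershell
# Sketch: wrap the build in the scanner's begin/end steps. The project key,
# token, and solution name are placeholders from the setup above.
SonarScanner.MSBuild.exe begin /k:"my-project-key" `
    /d:sonar.host.url="http://localhost:9000" `
    /d:sonar.login="9f6cf20347297da17ce4591b1d2ada820a30d23e"

MSBuild.exe MySolution.sln /t:Rebuild

SonarScanner.MSBuild.exe end /d:sonar.login="9f6cf20347297da17ce4591b1d2ada820a30d23e"
```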