Opportunity Lies in Every Challenge.
Profile | Portfolios | Credentials | Connect
Technology Solutions Portfolio
Mayer Putra
Transforming Ideas into Scalable Solutions.
Showcasing my expertise in designing, deploying, and optimizing cloud-based systems
to drive innovation and business growth.
1. Data Pipeline Creation: Real-Time Data Processing with Azure Stream Analytics
2. Predictive Analytics: Sales Forecasting Using Azure Machine Learning
3. Cloud Cost Optimization: Reducing Azure Expenditure by 25%
4. Security Compliance: Achieving GDPR Compliance in Azure
5. Infrastructure as Code: Automating Azure Infrastructure Deployment with Terraform
6. SSL Certificate Management and Renewal Automation
7. Load Balancer and VMSS Setup for Scalability
8. API Integration, Data Management, and Container Deployment for P2P Mobile Payment App
Technology Solutions Archive
Project Overview: Develop a real-time data processing pipeline for a financial services company. The goal was to ingest, process, and analyze stock market data to provide instant insights to traders.
Technology Stack
Data Ingestion: Azure Event Hubs
Data Processing: Azure Stream Analytics
Data Storage: Azure Data Lake Storage and Azure SQL Database
Visualization: Power BI
Key Achievements
High Throughput: The pipeline processes millions of data points per minute, ensuring traders receive timely and actionable insights.
Scalability: Designed to scale effortlessly to accommodate surges in data volume, ensuring reliable performance during market peaks.
Cost Efficiency: Leveraged Azure Data Lake Storage for affordable, scalable storage, with lifecycle management to optimize costs.
Challenges and Solutions
Challenge: Handling data spikes during market open and close.
Solution: Implemented dynamic scaling in Azure Event Hubs and Stream Analytics to maintain consistent performance during peak trading hours.
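A sketch of the Event Hubs side of that dynamic scaling (resource names are placeholders); auto-inflate raises the namespace's throughput units automatically as load grows, which covers the market open/close spikes described above:

```shell
# Enable auto-inflate on the Event Hubs namespace so throughput units
# scale up automatically during ingestion spikes (names are placeholders)
az eventhubs namespace update \
--resource-group MyResourceGroup \
--name MyEventHubNamespace \
--enable-auto-inflate true \
--maximum-throughput-units 20
```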
Code Snippets
# Azure Stream Analytics job creation (ARM template resource;
# the streamingJobName and sqlServerName parameters are assumed)
{
  "type": "Microsoft.StreamAnalytics/streamingjobs",
  "apiVersion": "2019-06-01-preview",
  "name": "[parameters('streamingJobName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "sku": {
      "name": "Standard"
    },
    "outputStartMode": "JobStartTime",
    "outputErrorPolicy": "Stop",
    "functions": [],
    "inputs": [
      {
        "name": "StockMarketData",
        "properties": {
          "type": "Stream",
          "serialization": {
            "type": "Json",
            "properties": {
              "encoding": "UTF8"
            }
          },
          "datasource": {
            "type": "Microsoft.ServiceBus/EventHub",
            "properties": {
              "eventHubNamespace": "[parameters('eventHubNamespace')]",
              "eventHubName": "[parameters('eventHubName')]",
              "sharedAccessPolicyName": "[parameters('sharedAccessPolicyName')]",
              "sharedAccessPolicyKey": "[parameters('sharedAccessPolicyKey')]"
            }
          }
        }
      }
    ],
    "outputs": [
      {
        "name": "OutputSQLDB",
        "properties": {
          "datasource": {
            "type": "Microsoft.Sql/Server/Database",
            "properties": {
              "server": "[parameters('sqlServerName')]",
              "database": "[parameters('sqlDatabaseName')]",
              "table": "[parameters('outputTableName')]",
              "user": "[parameters('sqlUser')]",
              "password": "[parameters('sqlPassword')]"
            }
          }
        }
      }
    ],
    "transformation": {
      "name": "RealTimeTransformation",
      "properties": {
        "streamingUnits": 6,
        "query": "SELECT * INTO OutputSQLDB FROM StockMarketData WHERE Price > 100"
      }
    }
  }
}
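The inline query above simply filters rows; windowed aggregation is more typical of market analytics. A sketch in Stream Analytics SQL, assuming the payload carries an EventTime field:

```sql
-- 10-second average price per symbol using a tumbling window
SELECT
    Symbol,
    AVG(Price) AS AvgPrice,
    System.Timestamp() AS WindowEnd
INTO OutputSQLDB
FROM StockMarketData TIMESTAMP BY EventTime
GROUP BY Symbol, TumblingWindow(second, 10)
```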
Reference Link: Real-time analytics with Azure Stream Analytics
Data Pipeline Creation: Real-Time Data Processing with Azure Stream Analytics
Project Overview: Conduct a cloud cost optimization project to reduce Azure expenditure while maintaining performance and reliability.
Technology Stack
Monitoring: Azure Monitor, Azure Cost Management + Billing
Automation: Azure Logic Apps, Azure Automation
Optimization Tools: Azure Advisor, Azure Reserved VM Instances
Key Achievements
Cost Reduction: Reduced monthly Azure spending by 25% through strategic use of reserved instances, autoscaling, and resource rightsizing.
Performance Improvement: Identified and decommissioned underutilized resources, improving overall application efficiency.
Automation: Implemented automated cost monitoring and alerting, enabling proactive management of cloud expenses.
Challenges and Solutions
Challenge: Balancing cost reduction with performance needs.
Solution: Developed a customized strategy combining reserved instances and spot VMs for cost-effective, high-performance workloads.
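The cost tradeoff behind that strategy can be illustrated with a small calculation. All prices below are assumptions for illustration only; real Azure rates vary by region, SKU, and over time:

```python
# Hypothetical hourly rates for a mid-size VM (illustrative, not real quotes)
PAYG_HOURLY = 0.229      # assumed pay-as-you-go rate
RESERVED_HOURLY = 0.147  # assumed effective rate under a 1-year reservation
SPOT_HOURLY = 0.046      # assumed average spot rate

HOURS_PER_MONTH = 730

def monthly_cost(n_reserved: int, n_spot: int, n_payg: int) -> float:
    """Blended monthly compute cost (USD) for a mixed fleet."""
    return HOURS_PER_MONTH * (
        n_reserved * RESERVED_HOURLY
        + n_spot * SPOT_HOURLY
        + n_payg * PAYG_HOURLY
    )

# All pay-as-you-go baseline vs. a mix of reserved, spot, and on-demand VMs
baseline = monthly_cost(0, 0, 10)
mixed = monthly_cost(6, 3, 1)
savings = 1 - mixed / baseline
print(f"baseline ${baseline:,.2f}/mo, mixed ${mixed:,.2f}/mo, savings {savings:.0%}")
```

The mix shown is arbitrary; the point is that steady-state load goes on reservations, interruptible work on spot, and only the unpredictable remainder stays pay-as-you-go.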
Code Snippets
# Note: Reserved VM Instance discounts are purchased through Azure Cost
# Management rather than Terraform. The sketch below instead reserves
# compute capacity with a Capacity Reservation and attaches a VM to it.
resource "azurerm_capacity_reservation_group" "example" {
  name                = "example-reservation-group"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
}

resource "azurerm_capacity_reservation" "example" {
  name                          = "example-reservation"
  capacity_reservation_group_id = azurerm_capacity_reservation_group.example.id

  sku {
    name     = "Standard_DS3_v2"
    capacity = 3
  }
}

resource "azurerm_linux_virtual_machine" "example" {
  name                          = "example-vm"
  location                      = azurerm_resource_group.example.location
  resource_group_name           = azurerm_resource_group.example.name
  size                          = "Standard_DS3_v2"
  admin_username                = "adminuser"
  network_interface_ids         = [azurerm_network_interface.example.id]
  capacity_reservation_group_id = azurerm_capacity_reservation_group.example.id

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}
Reference Link: Azure VMware Solution design principles
Cloud Cost Optimization: Reducing Azure Expenditure by 25%
Project Overview: Create a predictive analytics model for an e-commerce company to forecast future sales based on historical trends, seasonal patterns, and marketing campaigns.
Technology Stack
Data Management: Azure Machine Learning Datasets
Model Training: Azure Machine Learning with scikit-learn (RandomForestRegressor)
Model Management: Azure ML model registry
Visualization: Power BI
Key Achievements
Sales Forecasting: Trained and registered a regression model that forecasts sales from historical, seasonal, and campaign features.
Reproducibility: Versioned datasets and registered models in the Azure ML workspace, making experiments repeatable and auditable.
Operationalization: Registered the trained model in Azure ML so it can be deployed as an endpoint for downstream reporting.
Challenges and Solutions
Challenge: Capturing seasonal patterns and campaign effects in the forecast.
Solution: Engineered seasonal and campaign features into the training data and validated the model on a held-out test split.
Code Snippets
# Python code for training a machine learning model using Azure ML
import joblib
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from azureml.core import Workspace, Dataset, Model

# Load dataset from the Azure ML workspace
ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, name='sales_data')

# Convert to a pandas dataframe
df = dataset.to_pandas_dataframe()

# Split features and target
X = df.drop(['Sales'], axis=1)
y = df['Sales']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a RandomForest regressor
model = RandomForestRegressor(n_estimators=100, max_depth=5, random_state=42)
model.fit(X_train, y_train)

# Evaluate the model (score() returns R^2 for regressors)
score = model.score(X_test, y_test)
print(f"Model R^2: {score:.2f}")

# Serialize the model, then register it in Azure ML
model_path = 'outputs/sales_forecast_model.pkl'
joblib.dump(model, model_path)
Model.register(workspace=ws, model_path=model_path, model_name='SalesForecastModel')
Reference Link: How Azure Machine Learning works: resources and assets
Predictive Analytics: Sales Forecasting Using Azure Machine Learning
Project Overview: Security compliance initiative to ensure that a global retailer’s Azure infrastructure adhered to GDPR requirements, focusing on data protection and privacy.
Technology Stack
Data Encryption: Azure Key Vault, Azure Disk Encryption
Compliance Monitoring: Azure Policy, Azure Security Center
Data Privacy: Azure Active Directory, Role-Based Access Control (RBAC)
Key Achievements
Full Compliance: Successfully achieved GDPR compliance, safeguarding sensitive customer data and avoiding potential legal penalties.
End-to-End Encryption: Implemented comprehensive encryption strategies for data at rest and in transit, ensuring maximum data protection.
Continuous Monitoring: Set up continuous compliance monitoring, with automated alerts for any deviations from established policies.
Challenges and Solutions
Challenge: Ensuring compliance across various Azure services.
Solution: Developed a unified compliance framework using Azure-native tools, ensuring consistent policy enforcement across all services.
Code Snippets
# Azure CLI commands for enabling encryption and auditing for GDPR compliance
# Enable Transparent Data Encryption (TDE) for data at rest
az sql db tde set --resource-group myResourceGroup --server myServer --database myDatabase --status Enabled
# Configure the database audit policy to log GDPR-sensitive data access
az sql db audit-policy update \
--resource-group myResourceGroup \
--server myServer \
--name myDatabase \
--state Enabled \
--blob-storage-target-state Enabled \
--storage-account myStorageAccount \
--retention-days 90 \
--actions SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP FAILED_DATABASE_AUTHENTICATION_GROUP BATCH_COMPLETED_GROUP
# Create Azure Policy to enforce data encryption across subscriptions
az policy definition create --name "EnforceDataEncryption" --rules '{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Sql/servers/databases"
      },
      {
        "field": "Microsoft.Sql/servers/databases/transparentDataEncryption.status",
        "notEquals": "Enabled"
      }
    ]
  },
  "then": {
    "effect": "Deny"
  }
}' --mode All
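Creating the definition alone does not enforce anything; a follow-up assignment puts the policy into effect (the subscription ID is a placeholder):

```shell
# Assign the policy at subscription scope so the Deny effect applies
az policy assignment create \
--name "EnforceDataEncryptionAssignment" \
--policy "EnforceDataEncryption" \
--scope "/subscriptions/<subscription-id>"
```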
Reference Link: Microsoft’s GDPR Commitments to Customers of our Generally Available Enterprise Software Products
Security Compliance: Achieving GDPR Compliance in Azure
Project Overview: Automate the deployment of cloud infrastructure using Terraform, enabling consistent and efficient provisioning of environments across Azure.
Technology Stack
IaC Tool: Terraform
Cloud Platform: Azure (Azure Virtual Network, Azure Virtual Machines, Azure SQL Database, Azure Storage)
CI/CD Pipeline: Azure DevOps
Key Achievements
Automation: Reduced deployment times from days to minutes, enabling quicker launches of new features.
Consistency: Ensured that every environment (development, staging, production) was provisioned identically, reducing the risk of configuration drift.
Scalability: Designed modular Terraform configurations, allowing the infrastructure to scale automatically based on demand.
Challenges and Solutions
Challenge: Securely managing Terraform state files in a collaborative environment.
Solution: Used Azure Storage with state locking enabled to store Terraform state files securely, facilitating safe collaboration across the team.
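The remote-state setup described above can be sketched as a Terraform backend block (resource names are placeholders; the Azure blob backend provides state locking automatically via blob leases):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"        # placeholder names
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
```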
Code Snippets
# Terraform code for deploying a complete Azure environment
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "East US"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "example" {
  name                 = "example-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "example" {
  name                  = "example-vm"
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.name
  network_interface_ids = [azurerm_network_interface.example.id]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "example-osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  # The legacy azurerm_virtual_machine resource uses storage_image_reference
  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "adminuser"
    admin_password = "P@ssw0rd1234!" # example only; inject via a variable in practice
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}
Reference Link: Terraform on Azure documentation
Infrastructure as Code: Automating Azure Infrastructure Deployment with Terraform
Project Overview: Deploy an automated, cloud-based solution for managing SSL certificates on the brand’s website, streamlining the renewal process to enhance security while minimizing manual management effort.
Technology Stack
Cloud Platform: Azure (Azure Key Vault, Azure App Service, Azure Role-Based Access Control)
SSL/TLS Certificates: X.509 certificates, custom domain integration
Certificate Authority (CA): CSR generation, certificate signing, and merging
Security: Secrets and key management, RBAC (Role-Based Access Control)
Key Achievements
Automation: Implemented automated renewal of SSL certificates for the domain, eliminating manual renewal work.
Seamless Integration: Integrated Azure Key Vault with Azure App Service to automate certificate management, from generation to renewal and deployment.
Enhanced Security: Utilized Azure Role-Based Access Control (RBAC) to restrict access to sensitive certificates and keys, ensuring that only authorized users could manage secrets.
Operational Efficiency: Reduced downtime risks and manual intervention by automating the entire certificate lifecycle, from CSR generation to installation, contributing to a smoother and more secure operation.
Challenges and Solutions
Challenge: The SSL certificate for the domain had expired, requiring a secure and automated renewal process.
Solution: Utilized Azure Key Vault to generate and store SSL certificates, and integrated with the Certificate Authority (CA) to automate the certificate generation and renewal process.
Challenge: Ensuring secure access to sensitive secrets and certificates, with governance over who could manage them.
Solution: Employed Azure Role-Based Access Control (RBAC) to manage access to the Key Vault, restricting access to authorized personnel only.
Challenge: Requirement for a seamless automation of certificate renewals to avoid service disruptions and ensure secure communication over HTTPS.
Solution: Configured Azure App Service to work with Azure Key Vault, enabling automated certificate renewal without manual intervention.
Code Snippets
# Create a Key Vault in Azure
az keyvault create --name MyKeyVault --resource-group MyResourceGroup --location westeurope
# Generate a self-signed certificate and store it in Key Vault
az keyvault certificate create --vault-name MyKeyVault --name MySSLCert --policy "$(az keyvault certificate get-default-policy)"
# Import the Key Vault certificate into the App Service, then bind it
az webapp config ssl import \
--resource-group MyResourceGroup \
--name MyAppServiceName \
--key-vault MyKeyVault \
--key-vault-certificate-name MySSLCert
az webapp config ssl bind \
--resource-group MyResourceGroup \
--name MyAppServiceName \
--certificate-thumbprint <CertificateThumbprint> \
--ssl-type SNI
# Associate the custom domain with the SSL certificate
az webapp config hostname add \
--webapp-name MyAppServiceName \
--resource-group MyResourceGroup \
--hostname <domain.com>
# Automate certificate renewal
az keyvault certificate contact add \
--vault-name MyKeyVault \
--email "<admin@email.com>"
Reference Link: Use a TLS/SSL certificate in your code in Azure App Service
SSL Certificate Management and Renewal Automation
Project Overview: Set up a scalable and reliable infrastructure for an internet services company to handle traffic spikes. This included creating a load balancer, configuring virtual machines (VMs) in a scale set, and managing network security. The goal was to ensure high availability, improve performance, and automate traffic management while keeping costs low.
Technology Stack
Cloud Platform: Azure (Azure Load Balancer, Azure VM Scale Sets, Azure Virtual Network)
Networking Components: Public and internal load balancers, VNET, inbound rules, subnets
Security: CIDR block configuration, network security rules
Scaling Policy: VM Scale Set auto-scaling configuration
Global Load Balancing: Azure Front Door, Application Gateway integration
Key Achievements
Scalable Infrastructure: Created a scalable infrastructure using VM Scale Sets (VMSS) to handle fluctuating demand, allowing automatic scaling based on the load.
Load Balancing Efficiency: Implemented Azure Load Balancer and Azure Front Door to distribute incoming traffic across multiple VMs, ensuring high availability and reliability.
Improved Network Management: Configured Virtual Network (VNet), subnets, and inbound rules to streamline network traffic and secure connections.
Cost-Effective Solution: Enabled a pay-per-use strategy for VMSS, allowing the company to scale down resources during low traffic, reducing unnecessary expenses.
Redundancy and Performance: By distributing the application across multiple instances and regions, ensured the application remained available and performed optimally, even under heavy load.
Challenges and Solutions
Challenge: The company needed a solution that could handle fluctuations in traffic without over-provisioning resources.
Solution: Set up Azure VMSS to automatically scale up or down based on traffic demands, optimizing resource usage and reducing costs.
Challenge: Ensuring high availability and even distribution of traffic across multiple instances.
Solution: Deployed Azure Load Balancer and Azure Front Door for load distribution and global load balancing, ensuring traffic was directed to healthy and operational VMs.
Challenge: The application required secure and well-structured networking for optimal performance and security.
Solution: Configured a Virtual Network (VNET) with properly set inbound rules and subnet configuration, optimizing network traffic and securing the environment.
Code Snippets
# Create a Public Load Balancer
az network lb create \
--resource-group MyResourceGroup \
--name MyLoadBalancer \
--sku Standard \
--frontend-ip-name MyFrontendIP \
--backend-pool-name MyBackendPool
# Create a Virtual Machine Scale Set (VMSS)
az vmss create \
--resource-group MyResourceGroup \
--name MyVMSS \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--load-balancer MyLoadBalancer \
--vnet-name MyVNet \
--subnet MySubnet
# Add an inbound rule for port 80 (HTTP)
az network nsg rule create \
--resource-group MyResourceGroup \
--nsg-name MyNetworkSecurityGroup \
--name AllowHTTP \
--protocol Tcp \
--priority 1001 \
--destination-port-ranges 80
# Add a subnet to the VNET
az network vnet subnet create \
--resource-group MyResourceGroup \
--vnet-name MyVNet \
--name MySubnet \
--address-prefixes 10.0.0.0/24
# Create an Application Gateway and assign VNET/Subnet
az network application-gateway create \
--name MyAppGateway \
--resource-group MyResourceGroup \
--vnet-name MyVNet \
--subnet MyAppSubnet \
--capacity 2 \
--sku Standard_v2 \
--frontend-port 80 \
--backend-pool-name MyBackendPool
# Set up Azure Front Door for global load balancing
az network front-door create \
--resource-group MyResourceGroup \
--name MyFrontDoor \
--backend-address <backend-hostname>
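The VMSS auto-scaling policy referenced in the Technology Stack is not shown above; a sketch using az monitor autoscale (instance counts and thresholds are illustrative):

```shell
# Attach an autoscale profile to the scale set
az monitor autoscale create \
--resource-group MyResourceGroup \
--resource MyVMSS \
--resource-type Microsoft.Compute/virtualMachineScaleSets \
--name MyAutoscale \
--min-count 2 \
--max-count 10 \
--count 2
# Scale out by 2 instances when average CPU exceeds 70% over 5 minutes
az monitor autoscale rule create \
--resource-group MyResourceGroup \
--autoscale-name MyAutoscale \
--condition "Percentage CPU > 70 avg 5m" \
--scale out 2
```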
Reference Link: Virtual Machine Scale Sets with Azure Load Balancer
Load Balancer and VMSS Setup for Scalability
Project Overview: Set up API management and integrate data from a peer-to-peer mobile payment app into a cloud-based environment. Managed data ingestion, processing, and storage for analytics. Deployed and managed containers using Kubernetes to ensure scalability and efficient data flow from APIs to databases and Databricks for real-time analytics and visualization.
Technology Stack
Cloud Platform: Azure (API Management, Event Hub, Kubernetes Service, Cosmos DB, Databricks)
Data Flow: API to Event Hub to Databricks
Databases: SQL, MySQL, Cosmos DB for data storage and integration
Visualization Tool: Databricks for data staging, transformation and visualization
Key Achievements
API Integration: Successfully integrated the mobile app API with cloud API using Azure API Management, enabling secure and efficient communication between services.
Real-time Data Processing: Deployed Event Hub to capture and stream data from the API to Databricks, allowing for real-time processing and analytics.
Scalable Container Deployment: Set up Azure Kubernetes Service (AKS) for containerized deployment, providing scalability and resilience for the mobile payment app.
Efficient Data Flow: Configured data flow from AKS to SQL/MySQL/Cosmos DB, enabling smooth data ingestion into Databricks for processing and visualization without data migration.
Advanced Security: Implemented API keys, throttling, and JWT validation in API Management to secure access to the app’s mobile infrastructure and prevent security threats.
Challenges and Solutions
Challenge: Handling large amounts of data from the app API efficiently and in real-time.
Solution: Used Event Hub to integrate and stream data into Databricks for real-time processing and analytics.
Challenge: Ensuring scalability and easy management of containerized applications.
Solution: Implemented Azure Kubernetes Service (AKS) to manage and scale containers dynamically, based on demand.
Challenge: Securely managing sensitive data and ensuring compliance.
Solution: Leveraged Cosmos DB’s enterprise-grade encryption and Azure API Management’s security policies (API keys, JWT tokens) to maintain a secure environment for data storage and API access.
Code Snippets
# Create API Management service
az apim create \
--name MyAPIService \
--resource-group MyResourceGroup \
--location eastus \
--publisher-email <email@domain> \
--publisher-name <name>
# Set up an Event Hub for data ingestion
az eventhubs namespace create \
--resource-group MyResourceGroup \
--name MyEventHubNamespace \
--location eastus
az eventhubs eventhub create \
--resource-group MyResourceGroup \
--namespace-name MyEventHubNamespace \
--name MyEventHub \
--message-retention 4 \
--partition-count 2
# Deploy Kubernetes Service (AKS)
az aks create \
--resource-group MyResourceGroup \
--name MyKubernetesCluster \
--node-count 3 \
--generate-ssh-keys
# Create Cosmos DB account and container
az cosmosdb create \
--name MyCosmosDBAccount \
--resource-group MyResourceGroup \
--kind GlobalDocumentDB \
--locations regionName=eastus failoverPriority=0 isZoneRedundant=False
az cosmosdb sql database create \
--account-name MyCosmosDBAccount \
--resource-group MyResourceGroup \
--name MyDatabase
az cosmosdb sql container create \
--account-name MyCosmosDBAccount \
--database-name MyDatabase \
--name MyContainer \
--partition-key-path /mypartitionkey
# Set up Databricks for data processing
az databricks workspace create \
--resource-group MyResourceGroup \
--name MyDatabricksWorkspace \
--location eastus
# Connect Databricks to Cosmos DB for data visualization
spark.conf.set("spark.cosmos.accountEndpoint", "https://mycosmosdbaccount.documents.azure.com:443/")
spark.conf.set("spark.cosmos.accountKey", "<your-cosmosdb-key>")
dataframe = spark.read \
.format("cosmos.oltp") \
.option("spark.cosmos.database", "MyDatabase") \
.option("spark.cosmos.container", "MyContainer") \
.load()
# Visualize the processed data
display(dataframe)
Reference Link: Azure Container Apps hosting of Azure Functions
API Integration, Data Management, and Container Deployment for P2P Mobile Payment App
Project Overview: Set up traffic management and content distribution for a massively multiplayer online role-playing game (MMORPG). Integrated a content delivery network (CDN) with storage and deployed Azure Functions for automated notifications. Implemented Cosmos DB for user data storage and used Databricks to process and analyze game data.
Technology Stack
Cloud Platform: Azure (Traffic Manager, CDN, App Services, Cosmos DB, Databricks, Azure Functions)
Data Storage: Cosmos DB for managing user state data
Data Processing: Databricks for processing and transforming raw data
Notification Service: Azure Notification Hubs for sending push notifications to users
Traffic Management: Azure Traffic Manager for global traffic load balancing
Content Distribution: Azure CDN for delivering game assets with low latency
Key Achievements
Traffic Management Setup: Created Azure Traffic Manager to balance traffic globally and ensure high availability for game servers.
Improved Load Times: Integrated Azure CDN with Blob storage to deliver static content efficiently, reducing game load times and improving responsiveness.
User Data Storage: Deployed Cosmos DB to store and manage large volumes of user state data, ensuring high availability and low latency at a global scale.
Automated Notifications: Implemented Azure Functions to process game insights from Databricks and send real-time notifications to players through Azure Notification Hubs.
Scalable Infrastructure: Established an infrastructure that automatically scales with user demand, allowing for seamless operation during traffic surges.
Challenges and Solutions
Challenge: Reducing game load times and improving user experience.
Solution: Used Azure CDN to cache and deliver static content from locations closer to the users, significantly reducing latency.
Challenge: Handling large volumes of user data with high availability.
Solution: Deployed Cosmos DB for scalable, globally distributed data storage with low-latency performance.
Challenge: Sending real-time notifications based on processed game data.
Solution: Utilized Azure Functions to automate the process of sending notifications through Notification Hubs, triggered by insights from Databricks.
Code Snippets
# Create Traffic Manager profile
az network traffic-manager profile create \
--name MyTrafficManagerProfile \
--resource-group MyResourceGroup \
--routing-method Priority \
--unique-dns-name grimdueltrafficmanager
# Add an endpoint to Traffic Manager
az network traffic-manager endpoint create \
--resource-group MyResourceGroup \
--profile-name MyTrafficManagerProfile \
--name MyEndpoint \
--type azureEndpoints \
--target-resource-id "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MyWebApp"
# Create a CDN profile and endpoint
az cdn profile create \
--name MyCDNProfile \
--resource-group MyResourceGroup \
--location eastus \
--sku Standard_Microsoft
az cdn endpoint create \
--name MyCDNEndpoint \
--profile-name MyCDNProfile \
--resource-group MyResourceGroup \
--origin MyStorageAccountName.blob.core.windows.net \
--origin-host-header MyStorageAccountName.blob.core.windows.net
# Create Cosmos DB account and container
az cosmosdb create \
--name MyCosmosDBAccount \
--resource-group MyResourceGroup \
--kind GlobalDocumentDB \
--locations regionName=eastus failoverPriority=0 isZoneRedundant=False
az cosmosdb sql database create \
--account-name MyCosmosDBAccount \
--resource-group MyResourceGroup \
--name MyDatabase
az cosmosdb sql container create \
--account-name MyCosmosDBAccount \
--database-name MyDatabase \
--name MyContainer \
--partition-key-path /mypartitionkey
# Set up Azure Functions for notification processing
func init MyFunctionApp --worker-runtime node
func new --name SendNotification --template "HTTP trigger"
// Inside the SendNotification function (index.js)
module.exports = async function (context, req) {
    context.log('Sending notification to user...');
    // Send notification code
};
Reference Link: CDN Guidance
Traffic Management, CDN, and Data Processing
Project Overview: Developed a cloud-based system to enable real-time delivery tracking, in-app messaging between riders and users, and high availability of services. The solution leveraged serverless technologies and disaster recovery to ensure uninterrupted service and seamless interaction during food deliveries.
Technology Stack
Cloud Platform: Azure (BCDR, Azure Queue Storage, Front Door, Azure Functions, Service Bus, SignalR)
Disaster Recovery: Geo-redundant storage (GRS) for cross-region replication
Real-time Communication: SignalR for real-time chat functionality
Messaging Service: Azure Service Bus for managing messaging between microservices
Compute Service: Azure Functions for handling message processing and communication
Key Achievements
Real-Time Tracking: Enabled real-time delivery tracking for riders, allowing users to see live location updates without manual refresh.
In-App Messaging: Integrated SignalR to create a seamless chat room experience where riders and users can interact directly.
High Availability: Ensured app uptime and resilience using Azure Front Door and BCDR strategies, backed by SLAs.
Serverless Architecture: Used Azure Functions to simplify orchestration, manage complex workloads, and scale automatically based on demand.
Enterprise Messaging: Implemented Azure Service Bus for smooth communication between the app's services, ensuring reliable operation under fluctuating loads.
Challenges and Solutions
Challenge: Delivering continuous real-time data without manual updates.
Solution: Integrated SignalR to automatically push real-time updates to the app, providing a seamless experience.
Challenge: Managing unpredictable traffic volumes while ensuring app availability.
Solution: Implemented BCDR with geo-redundant storage and Azure Front Door to replicate data and ensure business continuity during outages.
Challenge: Implementing a scalable messaging system for in-app chat and notifications.
Solution: Deployed Azure Service Bus and Azure Functions to manage messaging queues and notifications, ensuring that messages were processed efficiently and reliably.
Code Snippets
// Setup for SignalR, Service Bus integration, and Azure Functions

// Negotiate.cs - HTTP triggered function to generate SignalR access token
[FunctionName("negotiate")]
public static SignalRConnectionInfo GetSignalRInfo(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
    [SignalRConnectionInfo(HubName = "riderHub")] SignalRConnectionInfo connectionInfo)
{
    return connectionInfo;
}

// message.cs - Service Bus triggered function to send a message to SignalR
[FunctionName("message")]
public static async Task Run(
    [ServiceBusTrigger("riderQueue", Connection = "ServiceBusConnection")] string message,
    [SignalR(HubName = "riderHub")] IAsyncCollector<SignalRMessage> signalRMessages,
    ILogger log)
{
    await signalRMessages.AddAsync(
        new SignalRMessage
        {
            Target = "newMessage",
            Arguments = new[] { message }
        });
    log.LogInformation($"Message sent to SignalR: {message}");
}

// Service Bus connection setup in local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "Your_Azure_Storage_Connection_String",
    "ServiceBusConnection": "Your_ServiceBus_Connection_String",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}

// chat.js - SignalR client to establish connection with SignalR hub
const connection = new signalR.HubConnectionBuilder()
    .withUrl("http://localhost:7071/api/")
    .build();

connection.on("newMessage", (message) => {
    console.log("New message received: " + message);
});

connection.start().catch(err => console.error(err.toString()));
Reference Link: Distributed tracing and correlation through Service Bus messaging
Real-Time Tracking and High Availability
©2022 by MP. All rights reserved.