Sunday, September 28, 2025

Learn TypeScript from Scratch to Advanced: Complete Tutorial with Interview Questions and Answers

 TypeScript has become one of the most in-demand programming languages for web development. It powers frameworks like Angular, works seamlessly with React and Node.js, and is widely adopted by companies for building scalable applications. If you’re preparing for interviews or want to upgrade your JavaScript skills, this TypeScript tutorial with interview questions will take you from beginner to advanced step by step.


🔹 What is TypeScript?

TypeScript is a superset of JavaScript developed by Microsoft. It adds static typing, interfaces, generics, and object-oriented programming (OOP) concepts on top of JavaScript. The TypeScript compiler (tsc) converts TypeScript code into plain JavaScript, making it compatible with any browser or framework.

Why TypeScript?

  • Early error detection with static typing

  • Enhanced IDE support (IntelliSense, autocompletion)

  • Better maintainability for large projects

  • Supports modern ES6+ features


🔹 Setting up TypeScript

  1. Install Node.js (download from nodejs.org)

  2. Install TypeScript globally

    npm install -g typescript
  3. Check version

    tsc -v
  4. Compile the TypeScript file and run the generated JavaScript

    tsc index.ts
    node index.js

🔹 TypeScript Basics (Beginner Level)

1. Data Types

let username: string = "Cherry";
let age: number = 12;
let isAdmin: boolean = true;
let scores: number[] = [10, 20, 30];
let tupleExample: [string, number] = ["Hasitha", 6];

2. Functions

function greet(name: string): string {
  return `Hello, ${name}`;
}

console.log(greet("CherryGPT"));

3. Interfaces

interface User {
  id: number;
  name: string;
}

let user: User = { id: 1, name: "Hasitha" };

4. Enums

enum Role {
  Admin,
  User,
  Guest
}

let myRole: Role = Role.Admin;

🔹 Intermediate TypeScript

1. Classes & Inheritance

class Animal {
  constructor(public name: string) {}

  speak(): void {
    console.log(`${this.name} makes a sound`);
  }
}

class Dog extends Animal {
  speak(): void {
    console.log(`${this.name} barks`);
  }
}

let dog = new Dog("Tommy");
dog.speak();

2. Generics

function identity<T>(value: T): T {
  return value;
}

console.log(identity<string>("Hello"));
console.log(identity<number>(123));

3. Type Aliases & Union Types

type ID = number | string;

let userId: ID = 101;
userId = "abc123";

4. Modules & Namespaces

// mathUtils.ts
export function add(a: number, b: number): number {
  return a + b;
}

// index.ts
import { add } from "./mathUtils";
console.log(add(5, 10));

🔹 Advanced TypeScript Concepts

1. Advanced Types

type Person = { name: string };
type Employee = Person & { salary: number };

let emp: Employee = { name: "Cherry", salary: 50000 };

2. Decorators (used in Angular)

function Logger(target: any) {
  console.log("Logging...", target);
}

@Logger
class TestClass {}

3. Type Guards

function printId(id: string | number) {
  if (typeof id === "string") {
    console.log("String ID:", id.toUpperCase());
  } else {
    console.log("Number ID:", id);
  }
}

4. Utility Types

interface Todo {
  title: string;
  description: string;
  completed: boolean;
}

type PartialTodo = Partial<Todo>;
type ReadonlyTodo = Readonly<Todo>;

🔹 TypeScript Best Practices

  • Always define types or interfaces

  • Use strict mode in tsconfig.json (a sample config is shown after this list)

  • Prefer readonly and private where possible

  • Keep functions pure and modular

  • Use Enums/Constants instead of magic numbers
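
For reference, a minimal tsconfig.json with strict mode enabled might look like this (a sketch; adjust the target, module, and output settings to your project):

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "outDir": "dist"
  }
}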


🔹 TypeScript Interview Questions and Answers

Beginner Level

Q1: What is TypeScript and how is it different from JavaScript?
👉 TypeScript is a superset of JavaScript that adds type safety, interfaces, generics, and OOP features. Unlike JavaScript, TypeScript code needs to be compiled into JS.

Q2: What are the basic data types in TypeScript?
👉 string, number, boolean, null, undefined, tuple, enum, any, void, unknown.

Q3: What is an interface in TypeScript?
👉 An interface defines the structure of an object. It enforces a contract between different parts of your code.


Intermediate Level

Q4: What are Generics in TypeScript?
👉 Generics allow writing reusable functions and classes that work with multiple types. Example:

function identity<T>(arg: T): T {
  return arg;
}

Q5: Difference between type and interface in TypeScript?
👉 Both define object shapes, but type can represent unions, primitives, and mapped types, whereas interface is best for object contracts and can be extended multiple times.
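
A small sketch of the difference (the names here are only for illustration):

// A type alias can name unions and primitives
type Status = "active" | "inactive" | 404;

// An interface describes an object contract, can be extended, and merges across declarations
interface Customer { name: string; }
interface Customer { age: number; } // both declarations merge into one Customer
interface Staff extends Customer { salary: number; }

let staffMember: Staff = { name: "Cherry", age: 30, salary: 50000 };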

Q6: What is the difference between unknown and any?
👉 any disables type checking completely. unknown is safer; you must check its type before using it.
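
A quick illustration:

let anything: any = "hello";
anything.toFixed(2); // compiles, but fails at runtime because any skips type checking

let value: unknown = "hello";
// value.toUpperCase(); // compile-time error: 'value' is of type 'unknown'
if (typeof value === "string") {
  console.log(value.toUpperCase()); // OK after narrowing
}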


Advanced Level

Q7: Explain Decorators in TypeScript.
👉 Decorators are special functions that modify classes, methods, or properties. They are heavily used in Angular for metadata annotations.

Q8: What are Utility Types in TypeScript?
👉 Predefined types like Partial<T>, Pick<T>, Readonly<T>, Record<K,T> that help in transforming object types.
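
Building on the Todo interface shown earlier, a short sketch of Pick and Record:

type TodoPreview = Pick<Todo, "title" | "completed">; // keeps only the listed keys
type TodosById = Record<number, Todo>;                // object type keyed by number with Todo values

let preview: TodoPreview = { title: "Write blog", completed: false };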

Q9: How does TypeScript improve large-scale application development?
👉 By enforcing type safety, modularization, OOP principles, and preventing runtime errors, making code maintainable and scalable.

✅ Conclusion

TypeScript is not just an extension of JavaScript—it’s a game-changer for modern development. By learning the fundamentals, moving into advanced topics, and preparing with interview questions, you can become a confident TypeScript developer ready for real-world projects and interviews.

Saturday, September 27, 2025

Integrate AI/ML models using services like Azure Cognitive Services, OpenAI, or custom models

There are three main ways to add Artificial Intelligence (AI) and Machine Learning (ML) capabilities to your application. Let's detail each one with examples, use cases, advantages, and considerations.


1. Azure Cognitive Services

Azure Cognitive Services is Microsoft’s suite of ready-made AI APIs that developers can easily plug into their apps without building or training models from scratch.

🔹 Examples of Services

  • Vision → Image recognition, OCR (text from images), facial recognition, object detection.

  • Speech → Speech-to-text, text-to-speech, speech translation.

  • Language → Sentiment analysis, translation, text analytics, QnA Maker.

  • Decision → Personalizer (recommendation system), anomaly detector.

🔹 How Integration Works

  • You call REST APIs or use SDKs (C#, Python, Java, etc.).

  • Example: Upload an image → API returns labels like “dog, grass, outdoor”.

var client = new ComputerVisionClient(
    new ApiKeyServiceClientCredentials("<API_KEY>"))
{
    Endpoint = "<ENDPOINT>"
};

var result = await client.AnalyzeImageAsync(
    imageUrl,
    new List<VisualFeatureTypes?> { VisualFeatureTypes.Tags });

✅ Advantages

  • No ML expertise needed.

  • Scalable and secure (hosted on Azure).

  • Pre-trained on massive datasets.

⚠️ Considerations

  • Limited customization.

  • Pay per API call → cost grows with usage.


2. OpenAI (like ChatGPT, GPT models, DALL·E, Whisper)

OpenAI provides powerful foundation AI models that can be integrated for natural language understanding, text generation, image creation, and more.

🔹 Examples

  • GPT models (ChatGPT, GPT-4, GPT-3.5, GPT-5) → Text generation, Q&A, summarization, code assistance.

  • DALL·E → Image generation from text prompts.

  • Whisper → Speech-to-text transcription.

🔹 How Integration Works

  • Use Azure OpenAI Service (enterprise version) or OpenAI API directly.

  • Example: Asking GPT to summarize a customer support ticket.

import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a support assistant."},
        {"role": "user", "content": "Summarize the following ticket: ..."}
    ]
)

print(response['choices'][0]['message']['content'])

✅ Advantages

  • State-of-the-art AI (very flexible).

  • Can be fine-tuned for specific domains.

  • Supports multiple modalities (text, image, audio).

⚠️ Considerations

  • Requires careful prompt engineering for best results.

  • Sensitive to input phrasing (outputs may vary).

  • Pricing is token-based (input + output length).


3. Custom Models (Train Your Own ML/AI Models)

This approach means you build your own machine learning models when pre-built services don’t fit your needs.

🔹 Steps to Build Custom Models

  1. Data Collection → Gather training datasets (structured, images, text, etc.).

  2. Model Training → Use frameworks like TensorFlow, PyTorch, Scikit-learn, or Azure Machine Learning.

  3. Deployment → Package as a REST API (Flask, FastAPI, or Azure ML deployment).

  4. Integration → Application calls your model API just like Cognitive Services.

🔹 Example

Custom fraud detection in a banking app:

  • Train a classification model with transaction data.

  • Deploy on Azure Machine Learning or Azure Kubernetes Service (AKS).

  • Application sends transaction data → Model predicts fraud probability.

from fastapi import FastAPI
import joblib

app = FastAPI()
model = joblib.load("fraud_model.pkl")

@app.post("/predict")
def predict(transaction: dict):
    features = [transaction['amount'], transaction['location'], transaction['device']]
    prediction = model.predict([features])
    return {"fraud": bool(prediction[0])}

✅ Advantages

  • Full customization.

  • Can train on your proprietary data.

  • Better suited for domain-specific use cases (healthcare, finance, retail).

⚠️ Considerations

  • Requires ML expertise (data science + model training).

  • Need infrastructure to train and host models.

  • Higher time and cost investment.


🔑 Quick Comparison

Approach | Best For | Effort Level | Customization | Examples
Azure Cognitive Services | Quick AI integration, standard use cases | Low | Low | OCR, sentiment analysis
OpenAI | Conversational AI, creative tasks | Medium | Medium (via fine-tuning, prompts) | Chatbots, summarizers
Custom Models | Domain-specific solutions | High | High | Fraud detection, medical AI

👉 So in practice:

  • If you want fast integration → Use Azure Cognitive Services.

  • If you need natural language or generative AI → Use OpenAI.

  • If you need business-specific intelligence → Build Custom Models.

Azure Cognitive Services — Step-by-step guide with a real-time example (for .NET Core + Angular)

Short summary: This article explains what Azure Cognitive Services (now part of Azure AI services) is, shows the service families, gives a step-by-step setup and integration guide, and walks through a concrete real-time example: live speech → transcription → sentiment analysis using the Speech SDK + Text Analytics in a .NET Core app. Code samples and production best practices are included so you can adapt them to your own projects.


What is Azure Cognitive Services (Azure AI services)?

Azure Cognitive Services (now presented as Azure AI services) is a set of cloud-hosted, pre-built and customizable AI APIs that let developers add vision, speech, language, decisioning and search capabilities into apps without building complex ML models from scratch. Services are available as REST APIs and native SDKs for common languages. Use cases range from OCR and face/object detection, to speech-to-text, text understanding and personalizers/recommenders.


Service families (quick overview)

  • Vision — image analysis, OCR (Read API), custom vision, object detection.

  • Speech — real-time and batch speech-to-text, text-to-speech, speech translation, speaker recognition.

  • Language / Text (Text Analytics) — sentiment analysis, key phrase extraction, named entity recognition, summarization, custom classification.

  • Decision & Search — recommendations, anomaly detection, and Azure AI Search for “chat with your data” scenarios.


Why use Azure Cognitive Services?

  • Fast time-to-market: pre-trained models that work out of the box.

  • Scalable & managed: Microsoft hosts and manages the infrastructure.

  • Multiple access patterns: REST + SDKs + streaming SDKs (for real-time audio).


Step-by-step: getting started (high level)

Prerequisites

  • Azure subscription (free tier available for initial experiments).

  • .NET 6/7 SDK and Visual Studio / VS Code (for the .NET sample).

  • A microphone (for the live speech demo) and basic familiarity with Azure Portal or Azure CLI.

Step 1 — Create an Azure AI / Cognitive Services resource

  1. In the Azure Portal click Create a resource → AI + Machine Learning → Azure AI services / Cognitive Services (or use the new Azure AI multi-service resource).

  2. Choose region, pricing tier (e.g., S0 or free if available), and resource group.

  3. After deployment, you’ll have an endpoint and keys you can use to call APIs. (You can also create resources programmatically with az cognitiveservices account create — see docs for full CLI parameters).
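
For illustration, a CLI command along these lines creates a multi-service resource (the resource, group, and region names below are placeholders; run az cognitiveservices account create --help for the full parameter list):

az cognitiveservices account create \
  --name my-ai-resource \
  --resource-group my-rg \
  --kind CognitiveServices \
  --sku S0 \
  --location eastus \
  --yes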

Tip: for production consider creating individual resources for heavy workloads (e.g., Speech in one region close to your users), or use the multi-service resource if you want one credential for multiple services.

Step 2 — Get keys and endpoint (or configure Azure AD)

  • From your resource in the portal, open Keys and Endpoint; copy the key and endpoint to your app's configuration (or store them in Azure Key Vault). Alternatively, prefer Azure AD / managed identity (keyless) authentication for production to avoid long-lived keys.

Step 3 — Choose SDK (recommended for streaming) or REST (good for simple requests)

  • Streaming / real-time audio: use the Speech SDK (native support for microphone streaming).

  • Text NLP tasks: use the Text Analytics / Language SDKs (Azure.AI.TextAnalytics or the newer language SDKs) or REST.

Step 4 — Install SDKs (example .NET)

dotnet add package Microsoft.CognitiveServices.Speech
dotnet add package Azure.AI.TextAnalytics

(These are the official NuGet packages for Speech and Text Analytics.)


Real-time example: Live speech → transcription → sentiment analysis

Goal: capture live audio (microphone), transcribe it in real time, and run sentiment on each recognized utterance — useful for customer support dashboards, call monitoring, or live captions with emotion/sentiment tagging.

Architecture (simple):
Browser (Angular) → microphone capture (WebRTC or browser media) → upload/stream audio to Backend (.NET Core) → backend uses Speech SDK for low-latency transcription → send transcript to Text Analytics for sentiment → results saved/displayed on UI.

For demo simplicity we'll show a .NET console app performing real-time local microphone transcription + sentiment. In a real app you’d move the logic into your backend Web API and stream audio from the frontend.

Environment variables (set these before running)

  • AZURE_SPEECH_KEY — Speech resource key

  • AZURE_SPEECH_REGION — Speech resource region (e.g., eastus)

  • AZURE_TEXT_KEY — Text Analytics key

  • AZURE_TEXT_ENDPOINT — Text Analytics endpoint (e.g., https://<your-resource>.cognitiveservices.azure.com/)

Minimal C# sample (real-time transcription → sentiment)

// Program.cs
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Azure;
using Azure.AI.TextAnalytics;

class Program
{
    static async Task Main()
    {
        // Read credentials from environment (or use secure configuration / managed identity in prod)
        var speechKey = Environment.GetEnvironmentVariable("AZURE_SPEECH_KEY");
        var speechRegion = Environment.GetEnvironmentVariable("AZURE_SPEECH_REGION");
        var textKey = Environment.GetEnvironmentVariable("AZURE_TEXT_KEY");
        var textEndpoint = Environment.GetEnvironmentVariable("AZURE_TEXT_ENDPOINT");

        if (string.IsNullOrEmpty(speechKey) || string.IsNullOrEmpty(speechRegion) ||
            string.IsNullOrEmpty(textKey) || string.IsNullOrEmpty(textEndpoint))
        {
            Console.WriteLine("Set AZURE_SPEECH_KEY, AZURE_SPEECH_REGION, AZURE_TEXT_KEY and AZURE_TEXT_ENDPOINT.");
            return;
        }

        var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
        using var recognizer = new SpeechRecognizer(speechConfig);

        var textClient = new TextAnalyticsClient(new Uri(textEndpoint), new AzureKeyCredential(textKey));

        recognizer.Recognized += async (s, e) =>
        {
            // e.Result may be partial or final depending on SDK/events; check the Reason
            if (e.Result.Reason == ResultReason.RecognizedSpeech)
            {
                var transcript = e.Result.Text;
                Console.WriteLine($"Transcript: {transcript}");

                // Call Text Analytics sentiment API
                var sentimentResponse = await textClient.AnalyzeSentimentAsync(transcript);
                var sentiment = sentimentResponse.Value;
                Console.WriteLine($"Sentiment: {sentiment.Sentiment} (pos: {sentiment.ConfidenceScores.Positive:0.00}, " +
                                  $"neu: {sentiment.ConfidenceScores.Neutral:0.00}, neg: {sentiment.ConfidenceScores.Negative:0.00})");
            }
            else if (e.Result.Reason == ResultReason.NoMatch)
            {
                Console.WriteLine("No speech could be recognized.");
            }
        };

        recognizer.Canceled += (s, e) =>
        {
            Console.WriteLine($"Recognition canceled: {e.Reason}. ErrorDetails: {e.ErrorDetails}");
        };

        await recognizer.StartContinuousRecognitionAsync();
        Console.WriteLine("Listening — press ENTER to stop.");
        Console.ReadLine();
        await recognizer.StopContinuousRecognitionAsync();
    }
}

This sample uses the Speech SDK for microphone streaming and the Azure.AI.TextAnalytics client for sentiment analysis. For more advanced control see the Speech SDK quickstarts which cover diarization, language detection and low-latency streaming.


Production considerations & best practices

Authentication & security

  • Prefer managed identities / Azure AD (keyless) for production to avoid embedding keys; use Azure Key Vault for stored secrets if needed. Many AI services support Microsoft Entra authentication and managed identities.

Scaling & architecture

  • For many concurrent audio streams, move recognition to an autoscaled backend (AKS / Azure Functions) and use message queues (Service Bus) to decouple ingestion from processing.

  • Batch text analytics calls when possible to reduce per-call overhead (Text Analytics supports batch input).
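
As a rough sketch, batching with the same TextAnalyticsClient used in the sample above could look like this (the document texts are placeholders):

var transcripts = new List<string> { "First utterance...", "Second utterance..." };
var batchResponse = await textClient.AnalyzeSentimentBatchAsync(transcripts);

foreach (var doc in batchResponse.Value)
{
    if (!doc.HasError)
        Console.WriteLine($"Sentiment: {doc.DocumentSentiment.Sentiment}");
}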

Cost & throttling

  • Monitor usage and enable quotas / alerts. Speech transcription and text analytics are billed per audio minute / text transactions — design batching and sampling accordingly. Refer to the pricing pages for the service you use.

Privacy & compliance

  • If you process PII or health data, confirm the service’s compliance and region choices (Azure publishes certifications and regional availability). Configure data retention and use private network options as needed.

Reliability

  • Add retry policies and exponential backoff for transient network/API errors (a minimal helper is sketched after this list). Use SDK built-in retry policies where available.

  • Implement fallbacks (e.g., if speech streaming fails, fall back to short file uploads + batch transcription).
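
A minimal retry helper with exponential backoff might look like the following sketch (attempt counts and delays are arbitrary; prefer the SDK's built-in retry options when they cover your case):

static async Task<T> RetryAsync<T>(Func<Task<T>> action, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return await action();
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Back off 1s, 2s, 4s, ... before the next attempt.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}

// Usage (hypothetical): var sentiment = await RetryAsync(() => textClient.AnalyzeSentimentAsync(transcript));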


Troubleshooting tips

  • No audio / no match — check microphone permissions, supported audio format, and sample rate.

  • Poor transcription — try an appropriate language model, set SpeechConfig.SpeechRecognitionLanguage, or use custom speech models for domain language.

  • Auth errors — ensure the key/endpoint match the resource region and that the managed identity has proper role assignments if using Azure AD.



Friday, September 26, 2025

Direct HTTP/REST Communication Between Microservices

 1. Direct HTTP/REST Communication Between Microservices

  • Each microservice exposes HTTP APIs.

  • Other services call those APIs directly when needed.

  • Example (Service A calls Service B):

// In Service A
var client = new HttpClient();
var response = await client.GetAsync("https://serviceB/api/data");
var data = await response.Content.ReadAsStringAsync();

Pros:

  • Simple to implement.

  • No extra infrastructure required.

Cons:

  • Tight coupling: Service A depends on Service B being up.

  • Latency: Each call adds network overhead.

  • Harder to scale with many services.


2. Shared Database Communication

  • Microservices write events or data to a shared database table.

  • Other microservices poll or read changes periodically.

  • Example:

    • Service A writes a “TaskCompleted” row in Events table.

    • Service B periodically checks for new rows and processes them.
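
A rough sketch of Service B's polling loop in .NET (top-level statements; the connection string, table, and column names are illustrative, and the Microsoft.Data.SqlClient package is assumed):

using Microsoft.Data.SqlClient;

while (true)
{
    await using var conn = new SqlConnection("<CONNECTION_STRING>");
    await conn.OpenAsync();

    // Read unprocessed "TaskCompleted" events written by Service A.
    var processedIds = new List<int>();
    var select = new SqlCommand(
        "SELECT Id, Payload FROM Events WHERE Processed = 0 AND Type = 'TaskCompleted'", conn);

    await using (var reader = await select.ExecuteReaderAsync())
    {
        while (await reader.ReadAsync())
        {
            var id = reader.GetInt32(0);
            var payload = reader.GetString(1);
            Console.WriteLine($"Processing event {id}: {payload}");
            processedIds.Add(id);
        }
    }

    // Mark handled rows so they are not picked up again.
    foreach (var id in processedIds)
    {
        var markDone = new SqlCommand("UPDATE Events SET Processed = 1 WHERE Id = @id", conn);
        markDone.Parameters.AddWithValue("@id", id);
        await markDone.ExecuteNonQueryAsync();
    }

    await Task.Delay(TimeSpan.FromSeconds(10)); // polling interval
}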

Pros:

  • No external message broker needed.

  • Simple for small systems.

Cons:

  • Polling can waste resources.

  • Hard to scale and maintain.

  • Not truly event-driven (eventual consistency).


3. In-Memory Event Bus

  • If microservices run in the same process or container, you can implement an in-memory event bus.

  • Example: using the Mediator pattern in .NET:

public interface IEvent {}

public class OrderCreatedEvent : IEvent
{
    public int OrderId { get; set; }
}

public class EventBus
{
    private readonly List<Action<IEvent>> _handlers = new();

    public void Subscribe(Action<IEvent> handler) => _handlers.Add(handler);

    public void Publish(IEvent @event)
    {
        foreach (var handler in _handlers)
            handler(@event);
    }
}

Pros:

  • Very fast.

  • No infrastructure required.

Cons:

  • Works only within a single application/process.

  • Cannot scale across multiple servers.


4. File-Based or Shared Memory

  • Services write messages to a shared file or memory.

  • Other services monitor the file or memory to read messages.

  • Rarely used; mostly for legacy or highly constrained environments.

Cons:

  • Complex, error-prone.

  • Not scalable.


Key Points When Avoiding Brokers

  1. Synchronous REST calls are simplest but couple services.

  2. Shared DB or in-memory events can simulate async communication but have limitations.

  3. For distributed, scalable systems, eventually you’ll hit limitations without a broker.


💡 Tip:
If you really want no external tool, the most practical approach is a REST API + polling or callback mechanism. You can even implement a custom lightweight event bus over HTTP for async messaging between microservices.
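
As an illustration only, a tiny HTTP callback "bus" in ASP.NET Core minimal APIs might look like the sketch below (the routes, record types, and in-memory subscription store are all hypothetical, and there is no retry or persistence):

using System.Collections.Concurrent;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
builder.Services.AddSingleton<SubscriptionStore>();
var app = builder.Build();

// Subscribers register the URL they want events POSTed to.
app.MapPost("/subscriptions", (Subscription sub, SubscriptionStore store) =>
{
    store.Urls.Add(sub.CallbackUrl);
    return Results.Ok();
});

// Publishers POST events here; the bus fans them out to every registered callback.
app.MapPost("/events", async (OrderCreatedEvent evt, SubscriptionStore store, IHttpClientFactory factory) =>
{
    var client = factory.CreateClient();
    foreach (var url in store.Urls)
        await client.PostAsJsonAsync(url, evt);
    return Results.Accepted();
});

app.Run();

record Subscription(string CallbackUrl);
record OrderCreatedEvent(int OrderId);

class SubscriptionStore
{
    public ConcurrentBag<string> Urls { get; } = new();
}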
