Thursday, September 25, 2025

How to achieve state management in Angular — a complete guide with examples

 State is the data that drives your UI: user input, server data, UI selections, etc. In Angular apps — especially as they grow — managing that state so the UI stays predictable, testable, and fast becomes essential.

This article walks through practical approaches to state management in Angular, shows complete code examples (simple and advanced), and gives recommendations for which approach to use for different scenarios.

Table of contents

  1. What is application state and why manage it?

  2. Approaches at a glance

  3. Local component state (when it's enough)

  4. Parent–child communication with @Input / @Output

  5. Shared services + RxJS (recommended for small–medium apps) — complete todo example

  6. NgRx (Redux-style) for large apps — complete todo example (actions, reducer, selectors, effects)

  7. Comparison: choose the right approach

  8. Best practices & performance tips

  9. Testing tips

  10. Migration notes & closing


1 — What is application state and why manage it?

State = any data your UI depends on (lists, selected item, user session, forms, loading flags, etc.).
Why manage it explicitly?

  • Prevent inconsistent UI (two components disagreeing on the same data).

  • Make the app predictable and easier to debug/test.

  • Improve performance by avoiding unnecessary re-rendering.

  • Keep code maintainable as the app grows.


2 — Approaches at a glance

  • Local component state: simplest; use when state doesn't leave the component.

  • @Input / @Output: parent-child communication for small hierarchies.

  • Shared services + RxJS (BehaviorSubject / ReplaySubject): lightweight reactive store — great for many apps.

  • NgRx (Redux pattern): single immutable store with actions/reducers/selectors/effects — best for complex enterprise apps.

  • Other libraries: NGXS, Akita, or NgRx Component Store (for local feature stores).


3 — Local component state (when it's enough)

If your state only lives in one component, keep it there — simple and fast.

// counter.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-counter',
  template: `
    <p>Count: {{ count }}</p>
    <button (click)="increment()">+</button>
  `
})
export class CounterComponent {
  count = 0;
  increment() { this.count++; }
}

Use this only when data doesn't need to be shared.


4 — Parent–child via @Input / @Output

Good for simple data flow between parent/child components. Avoid using this to pass data across many nesting levels.

// child.component.ts
import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'app-child',
  template: `<button (click)="saved.emit(name)">Save</button>`
})
export class ChildComponent {
  @Input() name!: string;
  @Output() saved = new EventEmitter<string>();
}

5 — Shared services + RxJS — the practical, reactive approach

When to use: medium apps, multiple components need the same data, or you want a reactive model without the complexity of NgRx.

Key idea

Create a singleton service that holds the state as RxJS streams (e.g., BehaviorSubject) and expose Observables to components. Components subscribe (usually via async pipe) and call service methods to update the state.

Example: Todo app using BehaviorSubject

Model

// todo.model.ts
export interface Todo {
  id: number;
  title: string;
  completed: boolean;
}

Service

// todo.service.ts
import { Injectable } from '@angular/core';
import { BehaviorSubject, of } from 'rxjs';
import { delay } from 'rxjs/operators';
import { Todo } from './todo.model';

@Injectable({ providedIn: 'root' })
export class TodoService {
  private todosSubject = new BehaviorSubject<Todo[]>([]);
  readonly todos$ = this.todosSubject.asObservable();
  private idCounter = 1;

  // Add a new todo
  add(title: string) {
    const newTodo: Todo = { id: this.idCounter++, title, completed: false };
    this.todosSubject.next([...this.todosSubject.value, newTodo]);
  }

  // Toggle completed
  toggle(id: number) {
    const updated = this.todosSubject.value.map(t =>
      t.id === id ? { ...t, completed: !t.completed } : t
    );
    this.todosSubject.next(updated);
  }

  // Remove
  remove(id: number) {
    this.todosSubject.next(this.todosSubject.value.filter(t => t.id !== id));
  }

  // Simulate loading from an API
  loadFromServer() {
    of<Todo[]>([
      { id: 101, title: 'Use Angular', completed: false },
      { id: 102, title: 'Write blog', completed: true }
    ])
      .pipe(delay(800))
      .subscribe(data => this.todosSubject.next(data));
  }
}

Component (presentation + binding)

// todo-list.component.ts
import { Component, ChangeDetectionStrategy } from '@angular/core';
import { TodoService } from './todo.service';
import { Todo } from './todo.model';

@Component({
  selector: 'app-todo-list',
  templateUrl: './todo-list.component.html',
  changeDetection: ChangeDetectionStrategy.OnPush // important for perf
})
export class TodoListComponent {
  todos$ = this.todoService.todos$;
  newTitle = '';

  constructor(private todoService: TodoService) {}

  add() {
    const title = this.newTitle.trim();
    if (!title) return;
    this.todoService.add(title);
    this.newTitle = '';
  }

  toggle(id: number) { this.todoService.toggle(id); }
  remove(id: number) { this.todoService.remove(id); }
  load() { this.todoService.loadFromServer(); }

  // Referenced by trackBy in the template
  trackById(index: number, todo: Todo) { return todo.id; }
}

Template

<!-- todo-list.component.html -->
<div>
  <input [(ngModel)]="newTitle" placeholder="New todo" />
  <button (click)="add()">Add</button>
  <button (click)="load()">Load sample</button>
</div>
<ul>
  <li *ngFor="let todo of todos$ | async; trackBy: trackById">
    <label>
      <input type="checkbox" [checked]="todo.completed" (change)="toggle(todo.id)" />
      <span [class.completed]="todo.completed">{{ todo.title }}</span>
    </label>
    <button (click)="remove(todo.id)">Delete</button>
  </li>
</ul>

Performance tips:

  • Use ChangeDetectionStrategy.OnPush.

  • Use trackBy in *ngFor.

  • Expose Observables and consume via async pipe — avoids manual subscriptions/unsubscriptions.

This pattern scales well for many apps, is easy to test, and keeps code readable.


6 — NgRx (Redux pattern) — for large, complex apps

When to use: large enterprise apps, many teams, complex UI flows, need time-travel debugging, want single source of truth.

NgRx implements a Redux-style architecture:

  • Actions — describe events (user or server).

  • Reducer — pure function that produces new state from previous state + action.

  • Store — holds the app state tree.

  • Selectors — memoized queries into the store.

  • Effects — handle side effects (HTTP, other async operations).

Below is a compact but full NgRx example for the same Todo functionality.

Actions

// todo.actions.ts
import { createAction, props } from '@ngrx/store';
import { Todo } from './todo.model';

export const loadTodos = createAction('[Todo] Load Todos');
export const loadTodosSuccess = createAction(
  '[Todo] Load Todos Success',
  props<{ todos: Todo[] }>()
);
export const loadTodosFailure = createAction(
  '[Todo] Load Todos Failure',
  props<{ error: any }>()
);
export const addTodo = createAction('[Todo] Add', props<{ title: string }>());
export const toggleTodo = createAction('[Todo] Toggle', props<{ id: number }>());
export const removeTodo = createAction('[Todo] Remove', props<{ id: number }>());

State & Reducer

// todo.reducer.ts
import { createReducer, on } from '@ngrx/store';
import * as TodoActions from './todo.actions';
import { Todo } from './todo.model';

export interface TodoState {
  todos: Todo[];
  loading: boolean;
  error: string | null;
}

export const initialState: TodoState = {
  todos: [],
  loading: false,
  error: null
};

export const todoReducer = createReducer(
  initialState,
  on(TodoActions.loadTodos, state => ({ ...state, loading: true })),
  on(TodoActions.loadTodosSuccess, (state, { todos }) => ({ ...state, loading: false, todos })),
  on(TodoActions.loadTodosFailure, (state, { error }) => ({ ...state, loading: false, error })),
  on(TodoActions.addTodo, (state, { title }) => {
    const newTodo: Todo = { id: Date.now(), title, completed: false };
    return { ...state, todos: [...state.todos, newTodo] };
  }),
  on(TodoActions.toggleTodo, (state, { id }) => ({
    ...state,
    todos: state.todos.map(t => (t.id === id ? { ...t, completed: !t.completed } : t))
  })),
  on(TodoActions.removeTodo, (state, { id }) => ({
    ...state,
    todos: state.todos.filter(t => t.id !== id)
  }))
);

Selectors

// todo.selectors.ts
import { createFeatureSelector, createSelector } from '@ngrx/store';
import { TodoState } from './todo.reducer';

const selectTodoFeature = createFeatureSelector<TodoState>('todos');

export const selectAllTodos = createSelector(selectTodoFeature, s => s.todos);
export const selectLoading = createSelector(selectTodoFeature, s => s.loading);
export const selectError = createSelector(selectTodoFeature, s => s.error);

Effects (handle async loading)

// todo.effects.ts
import { Injectable } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { HttpClient } from '@angular/common/http';
import { catchError, map, mergeMap, of } from 'rxjs';
import * as TodoActions from './todo.actions';

@Injectable()
export class TodoEffects {
  loadTodos$ = createEffect(() =>
    this.actions$.pipe(
      ofType(TodoActions.loadTodos),
      mergeMap(() =>
        this.http.get<any[]>('/api/todos').pipe(
          map(todos => TodoActions.loadTodosSuccess({ todos })),
          catchError(error => of(TodoActions.loadTodosFailure({ error })))
        )
      )
    )
  );

  constructor(private actions$: Actions, private http: HttpClient) {}
}

Module setup

// app.module.ts (imports snippet)
import { NgModule } from '@angular/core';
import { StoreModule } from '@ngrx/store';
import { EffectsModule } from '@ngrx/effects';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';
import { todoReducer } from './state/todo.reducer';
import { TodoEffects } from './state/todo.effects';

@NgModule({
  imports: [
    // ...
    StoreModule.forRoot({ todos: todoReducer }),
    EffectsModule.forRoot([TodoEffects]),
    StoreDevtoolsModule.instrument({ maxAge: 25 }) // optional, for debugging
  ],
  // ...
})
export class AppModule {}

Component using the store

// todo-list.component.ts (NgRx version)
import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import { addTodo, loadTodos, removeTodo, toggleTodo } from './state/todo.actions';
import { selectAllTodos, selectLoading } from './state/todo.selectors';

@Component({ /* selector/template as before */ })
export class TodoListComponent implements OnInit {
  todos$ = this.store.select(selectAllTodos);
  loading$ = this.store.select(selectLoading);

  constructor(private store: Store) {}

  ngOnInit() { this.store.dispatch(loadTodos()); }

  add(title: string) { this.store.dispatch(addTodo({ title })); }
  toggle(id: number) { this.store.dispatch(toggleTodo({ id })); }
  remove(id: number) { this.store.dispatch(removeTodo({ id })); }
}

Notes:

  • NgRx encourages immutability and pure reducers.

  • Use selectors to memoize and compute derived data.

  • Effects are the recommended place for network calls and side effects.

  • NgRx has optional helpers like createEntityAdapter() for normalized collections.
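
The "memoized" part of selectors is worth seeing in miniature. Below is a hand-rolled, single-input sketch of the idea behind createSelector (not NgRx's actual implementation): the projector re-runs only when its input changes by reference, so unchanged state costs nothing.

```typescript
// Simplified sketch of selector memoization: the projector re-runs only
// when its input changes by reference (NgRx selectors compare the same way).
function memoize<I, R>(project: (i: I) => R) {
  let lastInput: I | undefined;
  let lastResult!: R;
  let recomputations = 0;
  const select = (input: I): R => {
    if (input !== lastInput) {   // reference equality, not deep equality
      lastInput = input;
      lastResult = project(input);
      recomputations++;
    }
    return lastResult;
  };
  return { select, count: () => recomputations };
}

const completedCount = memoize((todos: { completed: boolean }[]) =>
  todos.filter(t => t.completed).length
);

const todos = [{ completed: true }, { completed: false }];
completedCount.select(todos);
completedCount.select(todos);       // same reference: cached, no recompute
console.assert(completedCount.count() === 1);

completedCount.select([...todos]);  // new reference: recomputes
console.assert(completedCount.count() === 2);
```

This is also why reducers must produce new object references for changed state: reference equality is what tells the selector (and OnPush change detection) that something is new.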


7 — Comparison: which approach when?

Scenario → recommended approach:

  • Small component-only state → Local component state

  • Parent–child simple sharing → @Input / @Output

  • Many components, app-level shared state (small–medium) → Service + RxJS (BehaviorSubject)

  • Complex app, many features, many teams, need strict patterns & devtools → NgRx (or NGXS / Akita)

  • Feature-local store inside a component tree → NgRx Component Store or Akita feature stores

8 — Best practices & performance tips

  • Prefer streams: Expose Observables from services and components; consume with async pipe.

  • OnPush change detection: Use for UI components showing Observable data.

  • Avoid storing derived state: Compute derived values with selectors or pipes.

  • Normalize big collections: Use entity adapters or normalized shapes (by id) to keep updates cheap.

  • Use trackBy with *ngFor.

  • Keep reducers pure and avoid side effects in reducers (use effects/services).

  • Lazy load feature state in NgRx to reduce initial bundle size.

  • Use devtools when using NgRx (@ngrx/store-devtools) for time-travel debugging.

  • Memoize selectors to avoid unnecessary recomputation.
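
The "normalize big collections" tip can be sketched without any library: keep entities in a map keyed by id plus an ordered id list, so lookups, updates, and removals touch one entry instead of scanning an array. This is a minimal stand-in for what @ngrx/entity's createEntityAdapter generates; the function names here (upsertTodo, removeById, selectAll) are illustrative, not NgRx APIs.

```typescript
// Minimal normalized-state sketch (hand-rolled stand-in for @ngrx/entity).
interface Todo { id: number; title: string; completed: boolean; }

interface TodoState {
  ids: number[];                      // preserves ordering
  entities: { [id: number]: Todo };   // O(1) lookup by id
}

const empty: TodoState = { ids: [], entities: {} };

// Insert or replace one todo without copying every other entry.
function upsertTodo(state: TodoState, todo: Todo): TodoState {
  const exists = todo.id in state.entities;
  return {
    ids: exists ? state.ids : [...state.ids, todo.id],
    entities: { ...state.entities, [todo.id]: todo },
  };
}

function removeById(state: TodoState, id: number): TodoState {
  const { [id]: _removed, ...rest } = state.entities;
  return { ids: state.ids.filter(x => x !== id), entities: rest };
}

// "Selector" that denormalizes back to an array for the view.
const selectAll = (state: TodoState): Todo[] =>
  state.ids.map(id => state.entities[id]);

let state = upsertTodo(empty, { id: 1, title: 'A', completed: false });
state = upsertTodo(state, { id: 2, title: 'B', completed: false });
state = upsertTodo(state, { id: 1, title: 'A*', completed: true }); // update, not append
console.assert(selectAll(state).map(t => t.title).join(',') === 'A*,B');
console.assert(removeById(state, 1).ids.length === 1);
```

With large lists this keeps reducer updates cheap and pairs naturally with trackBy, since ids are stable.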


9 — Testing tips

  • Services + BehaviorSubject: Test the service by subscribing to todos$ and asserting emissions after calling add()/remove()/toggle(). No need to mock Angular DI — the service is plain TS with RxJS.

  • NgRx:

    • Test reducers as pure functions (send action, assert new state).

    • Test effects with provideMockActions and mock HTTP calls.

    • Use provideMockStore for component unit tests to mock store selects and dispatches.

Example reducer test (Jasmine):

it('should add todo on addTodo action', () => {
  const initial = { todos: [], loading: false, error: null };
  const action = addTodo({ title: 'New' });
  const result = todoReducer(initial, action);
  expect(result.todos.length).toBe(1);
  expect(result.todos[0].title).toBe('New');
});
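
The service-with-BehaviorSubject pattern from section 5 is just as direct to test: subscribe, act, assert on the latest emission. The sketch below uses a tiny hand-rolled stand-in for BehaviorSubject so it runs with zero dependencies; with the real RxJS class the assertions are identical, since BehaviorSubject also emits synchronously to subscribers.

```typescript
// Tiny stand-in for RxJS BehaviorSubject: holds a current value and
// replays it to new subscribers, then pushes every update synchronously.
class SimpleBehaviorSubject<T> {
  private listeners: Array<(v: T) => void> = [];
  constructor(public value: T) {}
  next(v: T) { this.value = v; this.listeners.forEach(l => l(v)); }
  subscribe(l: (v: T) => void) { l(this.value); this.listeners.push(l); }
}

interface Todo { id: number; title: string; completed: boolean; }

// Same shape as the TodoService in section 5 — plain TS, no Angular DI needed.
class TodoService {
  private todosSubject = new SimpleBehaviorSubject<Todo[]>([]);
  readonly todos$ = this.todosSubject;
  private idCounter = 1;

  add(title: string) {
    this.todosSubject.next([
      ...this.todosSubject.value,
      { id: this.idCounter++, title, completed: false },
    ]);
  }

  toggle(id: number) {
    this.todosSubject.next(
      this.todosSubject.value.map(t => (t.id === id ? { ...t, completed: !t.completed } : t))
    );
  }
}

// "Test": capture the latest emission, call service methods, assert.
const service = new TodoService();
let latest: Todo[] = [];
service.todos$.subscribe(todos => (latest = todos));

service.add('Write tests');
console.assert(latest.length === 1 && latest[0].title === 'Write tests');

service.toggle(latest[0].id);
console.assert(latest[0].completed === true, 'toggle should flip completed');
```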

10 — Migration notes & closing

  • Start simple: adopt services + RxJS. Move to NgRx only when complexity/teams justify it.

  • Hybrid approach: many apps use a combination — NgRx for global domain state and services/component-store for local UI state.

  • Refactor incrementally: extract a feature into NgRx gradually (actions/reducer/selectors) instead of a big rewrite.


Final checklist for implementing state in Angular

  • Decide scope: local vs shared vs global.

  • Prefer immutable updates (spread operator).

  • Use Observables + async pipe.

  • Optimize change detection with OnPush.

  • Add unit tests for services, reducers, and effects.

  • For large apps, use NgRx (and devtools); for smaller apps, prefer services + RxJS.



🔹 .NET Framework vs .NET Core

 1. Introduction

  • .NET Framework

    • Released in 2002 by Microsoft.

    • Runs only on Windows.

    • Used for building desktop apps (WinForms, WPF), ASP.NET Web Forms/MVC, enterprise solutions, etc.

    • Mature and widely used, but Windows-only and no active feature development (only security fixes now).

  • .NET Core

    • Released in 2016 as a cross-platform, open-source re-implementation of .NET.

    • Runs on Windows, Linux, macOS.

    • Supports modern development (cloud, microservices, Docker, Kubernetes).

    • Actively developed and evolved into .NET 5+ (Unified .NET platform).


2. Key Differences

Each item lists .NET Framework first, then .NET Core:

  • Platform Support: Windows-only vs. cross-platform (Windows, Linux, macOS)

  • Deployment: installed on the Windows system vs. self-contained (bundled with the app) or framework-dependent

  • Performance: good, but older runtime vs. high-performance, optimized runtime (Kestrel web server)

  • Open Source: mostly closed source vs. fully open source (on GitHub)

  • Application Types: WinForms, WPF, ASP.NET Web Forms/MVC, WCF vs. ASP.NET Core (MVC, Razor Pages, Blazor), Console, Microservices

  • Cross-Platform Development: ❌ No vs. ✅ Yes

  • Microservices Support: limited vs. excellent (Docker + Kubernetes ready)

  • Cloud Support (Azure, AWS): supported but heavier vs. optimized for cloud-native

  • Future: maintenance only (no major new features) vs. active development (merged into .NET 5, .NET 6, .NET 7, .NET 8, .NET 9 …)

3. Architecture Differences

.NET Framework:

  • Runs only with Windows OS + IIS (Internet Information Services) for hosting web apps.

  • Traditional monolithic applications.

.NET Core:

  • Uses Kestrel Web Server (lightweight, cross-platform).

  • Can be hosted in IIS, Nginx, Apache, or Docker containers.

  • Supports microservices architecture.


4. Performance

  • .NET Core apps are generally faster because of:

    • JIT (RyuJIT improvements)

    • Lightweight runtime

    • Optimized memory management

    • Kestrel (high-performance web server)

Example: in plaintext benchmarks, ASP.NET Core can serve millions of requests per second, well beyond what classic ASP.NET on the .NET Framework achieves.


5. Deployment

  • .NET Framework: Installed on Windows; app depends on that system installation.

  • .NET Core:

    • Framework-dependent deployment (FDD): App runs on shared runtime.

    • Self-contained deployment (SCD): App carries its own runtime — no need to install .NET separately.


6. Ecosystem & Future

  • .NET Framework → Legacy projects, Windows desktop apps (still relevant for enterprises).

  • .NET Core → Foundation for modern apps.

  • In 2020, Microsoft merged everything into a unified platform called .NET 5+ (now we are at .NET 9).

    • .NET Framework will never get new features.

    • .NET Core is the future path.


7. When to Use

  • ✅ Use .NET Framework if:

    • You are maintaining an existing Windows-only enterprise app.

    • You need technologies not yet available in .NET Core (like full WCF or Web Forms).

  • ✅ Use .NET Core / .NET 5+ if:

    • You’re building new applications (web, microservices, cloud-native).

    • You want cross-platform support.

    • You care about performance and scalability.

    • You plan to use Docker, Kubernetes, or cloud hosting.


🔑 Quick Summary

  • .NET Framework = Old, Windows-only, legacy support.

  • .NET Core = Modern, cross-platform, high-performance, future-ready (part of .NET 5+).



Wednesday, September 24, 2025

SAGA Design Pattern in Microservices: Ensuring Data Consistency with Real-Time Examples

 In today’s cloud-native world, microservices architecture has become the backbone for building scalable, distributed, and independent applications. However, one of the biggest challenges with microservices is maintaining data consistency across multiple services. Unlike monolithic applications, where a single transaction spans the entire system, microservices are decentralized, often using their own databases.

This is where the SAGA design pattern comes into play. It provides a robust mechanism to maintain data consistency in microservices while ensuring fault tolerance and reliability.


What is the SAGA Design Pattern?

The SAGA pattern is a distributed transaction management mechanism that coordinates a long-running business process across multiple microservices. Instead of using a traditional two-phase commit (which is complex and not scalable), SAGA breaks a business transaction into a series of local transactions.

  • Each local transaction updates data within one service and publishes an event or message to trigger the next step.

  • If one of the local transactions fails, SAGA executes compensating transactions to undo the changes made by previous services, ensuring the system returns to a consistent state.

👉 In short: SAGA = Sequence of Local Transactions + Compensation for Failures


Why Do We Need SAGA in Microservices?

  • Decentralized Databases – Each microservice manages its own database. Traditional database-level transactions (ACID) don’t work across distributed services.

  • Eventual Consistency – SAGA ensures eventual consistency instead of strong consistency, which is more practical for microservices.

  • Failure Handling – Provides compensating actions when something goes wrong.

  • Scalability – Avoids heavy locking and blocking mechanisms of distributed transactions.


Types of SAGA Design Patterns

There are two main approaches to implement SAGA:

1. Choreography-Based SAGA (Event-Driven)

  • Each service performs a local transaction and then publishes an event.

  • Other services listen to that event and take action accordingly.

  • There is no central controller.

  • Best for simple workflows.

Example Flow:

  1. Order Service creates an order → publishes OrderCreated event.

  2. Payment Service listens, processes payment → publishes PaymentCompleted.

  3. Inventory Service listens, reserves items → publishes InventoryReserved.

  4. If Inventory fails, it publishes InventoryFailed, triggering Payment Service to issue a refund and Order Service to cancel the order.
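
The choreography flow above can be sketched with a simple in-process event bus (TypeScript here for brevity; a real system would use a broker such as Kafka or RabbitMQ). The event names mirror the list above; the handler bodies are illustrative.

```typescript
// Minimal in-process event bus — a stand-in for a real message broker.
type Handler = (payload: { orderId: string }) => void;
const handlers: { [event: string]: Handler[] } = {};
const on = (event: string, h: Handler) => (handlers[event] ??= []).push(h);
const publish = (event: string, payload: { orderId: string }) =>
  (handlers[event] ?? []).forEach(h => h(payload));

const log: string[] = [];

// Each service only knows the events it consumes and emits — no central controller.
on('OrderCreated', ({ orderId }) => {
  log.push(`payment charged for ${orderId}`);
  publish('PaymentCompleted', { orderId });
});

on('PaymentCompleted', ({ orderId }) => {
  const inStock = false; // simulate an out-of-stock failure
  if (inStock) publish('InventoryReserved', { orderId });
  else publish('InventoryFailed', { orderId });
});

// Compensations: each service reacts to the failure event independently.
on('InventoryFailed', ({ orderId }) => log.push(`payment refunded for ${orderId}`));
on('InventoryFailed', ({ orderId }) => log.push(`order ${orderId} cancelled`));

publish('OrderCreated', { orderId: 'Order42' });
// log now records charge, then refund, then cancellation — consistency restored.
console.assert(log.length === 3);
```

Note how no component orchestrates the whole flow; that is exactly what distinguishes choreography from the orchestration variant shown next.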


2. Orchestration-Based SAGA

  • A central orchestrator (SAGA orchestrator) coordinates the flow.

  • The orchestrator decides which service to call next and handles failures.

  • Best for complex workflows.

Example Flow:

  1. Order Service sends request to SAGA Orchestrator.

  2. Orchestrator calls Payment Service.

  3. After success, Orchestrator calls Inventory Service.

  4. If Inventory fails, Orchestrator calls Payment Service for refund and Order Service for cancellation.


Real-Time Example: E-Commerce Order Processing

Let’s consider a place order scenario in an online shopping system built using microservices:

  1. Order Service → Creates an order and initiates the workflow.

  2. Payment Service → Charges the customer’s credit card.

  3. Inventory Service → Reserves the items from stock.

  4. Shipping Service → Prepares the package for delivery.

Now, imagine the Inventory Service fails (e.g., product out of stock).

  • In a Choreography Saga, the InventoryFailed event triggers compensating actions:

    • Payment Service issues a refund.

    • Order Service cancels the order.

  • In an Orchestration Saga, the orchestrator detects the failure and explicitly calls the compensating transactions in Payment and Order services.

Thus, the system remains consistent, even in the event of partial failures.


Benefits of the SAGA Design Pattern

  • Ensures Data Consistency – Provides eventual consistency without relying on distributed transactions.

  • Improves Fault Tolerance – Handles failures gracefully using compensating transactions.

  • Scalable & Lightweight – No need for locking across services.

  • Flexible Approaches – Choose Choreography for simple flows and Orchestration for complex workflows.


Challenges of SAGA Pattern

  • ⚠️ Complex Compensations – Writing compensating transactions can be tricky.

  • ⚠️ Event Storming – Choreography may lead to a flood of events in large systems.

  • ⚠️ Debugging Difficulty – Tracing distributed transactions across microservices is harder.

  • ⚠️ Eventual Consistency – Not real-time consistency; developers must design the system with this in mind.


Best Practices for Implementing SAGA

  • Use idempotent transactions (executing the same action multiple times should not cause issues).

  • Implement transactional outbox patterns to ensure reliable event publishing.

  • Use message brokers like Kafka, RabbitMQ, or Azure Service Bus for event-driven sagas.

  • Centralize monitoring and logging for easier debugging.

  • Choose Choreography for smaller, simpler flows; Orchestration for complex, multi-step business processes.
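
The idempotency best practice above is easy to sketch: a consumer records the IDs of messages it has already processed, so a redelivery from the broker is acknowledged but not re-applied. TypeScript sketch (the RefundHandler name and counter are hypothetical stand-ins for a real side effect):

```typescript
// Idempotent consumer sketch: processed message IDs are remembered,
// so a redelivered message becomes a no-op instead of a double refund.
interface SagaMessage { messageId: string; orderId: string; }

class RefundHandler {
  private processed = new Set<string>();
  refundsIssued = 0;

  handle(msg: SagaMessage): void {
    if (this.processed.has(msg.messageId)) return; // duplicate delivery: skip
    this.processed.add(msg.messageId);
    this.refundsIssued++; // stand-in for the real refund side effect
  }
}

const handler = new RefundHandler();
const msg = { messageId: 'm-1', orderId: 'Order123' };

handler.handle(msg);
handler.handle(msg); // broker redelivers the same message
console.assert(handler.refundsIssued === 1, 'refund applied exactly once');
```

In production the processed-ID set would live in the service's database, typically written in the same local transaction as the business change (which is where the transactional outbox pattern mentioned above comes in).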


Conclusion

The SAGA design pattern in microservices is a powerful approach to managing distributed transactions and ensuring data consistency in a scalable, fault-tolerant way. By breaking down large business processes into local transactions and applying compensating actions when failures occur, SAGA helps businesses build resilient, reliable, and consistent microservices applications.

For an e-commerce order workflow, the SAGA pattern ensures that payments are refunded, orders are canceled, and systems remain consistent—even in failure scenarios.

👉 In short: SAGA is the backbone of reliable microservices transaction management.

Sample code for Saga Design pattern implementation

We’ll simulate an E-commerce Order Workflow with:

  • Order Service

  • Payment Service

  • Inventory Service

  • Shipping Service

  • Saga Orchestrator


using System;

using System.Threading.Tasks;

#region Services

// Order Service
public class OrderService
{
    public Task<string> CreateOrderAsync()
    {
        Console.WriteLine("✅ Order created successfully.");
        return Task.FromResult("Order123");
    }

    public Task CancelOrderAsync(string orderId)
    {
        Console.WriteLine($"❌ Order {orderId} cancelled.");
        return Task.CompletedTask;
    }
}

// Payment Service
public class PaymentService
{
    public Task<bool> ProcessPaymentAsync(string orderId)
    {
        Console.WriteLine($"💳 Payment processed for {orderId}.");
        return Task.FromResult(true);
    }

    public Task RefundPaymentAsync(string orderId)
    {
        Console.WriteLine($"💸 Payment refunded for {orderId}.");
        return Task.CompletedTask;
    }
}

// Inventory Service
public class InventoryService
{
    public Task<bool> ReserveInventoryAsync(string orderId)
    {
        Console.WriteLine($"📦 Inventory reserved for {orderId}.");

        // Simulating failure for demo purpose
        bool isStockAvailable = false;
        if (!isStockAvailable)
        {
            Console.WriteLine($"⚠️ Inventory reservation failed for {orderId}.");
            return Task.FromResult(false);
        }
        return Task.FromResult(true);
    }

    public Task ReleaseInventoryAsync(string orderId)
    {
        Console.WriteLine($"🔄 Inventory released for {orderId}.");
        return Task.CompletedTask;
    }
}

// Shipping Service
public class ShippingService
{
    public Task<bool> ShipOrderAsync(string orderId)
    {
        Console.WriteLine($"🚚 Order {orderId} shipped successfully.");
        return Task.FromResult(true);
    }
}

#endregion

#region Orchestrator

// SAGA Orchestrator
public class SagaOrchestrator
{
    private readonly OrderService _orderService;
    private readonly PaymentService _paymentService;
    private readonly InventoryService _inventoryService;
    private readonly ShippingService _shippingService;

    public SagaOrchestrator(OrderService orderService, PaymentService paymentService,
        InventoryService inventoryService, ShippingService shippingService)
    {
        _orderService = orderService;
        _paymentService = paymentService;
        _inventoryService = inventoryService;
        _shippingService = shippingService;
    }

    public async Task ExecuteOrderWorkflowAsync()
    {
        string orderId = await _orderService.CreateOrderAsync();
        try
        {
            bool paymentResult = await _paymentService.ProcessPaymentAsync(orderId);
            if (!paymentResult) throw new Exception("Payment Failed");

            bool inventoryResult = await _inventoryService.ReserveInventoryAsync(orderId);
            if (!inventoryResult) throw new Exception("Inventory Reservation Failed");

            bool shippingResult = await _shippingService.ShipOrderAsync(orderId);
            if (!shippingResult) throw new Exception("Shipping Failed");

            Console.WriteLine("🎉 SAGA Completed: Order successfully processed.");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"⚠️ SAGA Compensation triggered due to: {ex.Message}");

            // Compensating transactions in reverse order
            await _inventoryService.ReleaseInventoryAsync(orderId);
            await _paymentService.RefundPaymentAsync(orderId);
            await _orderService.CancelOrderAsync(orderId);

            Console.WriteLine("✅ Compensation completed, system is consistent.");
        }
    }
}

#endregion

#region Program

public class Program
{
    public static async Task Main(string[] args)
    {
        var orchestrator = new SagaOrchestrator(
            new OrderService(),
            new PaymentService(),
            new InventoryService(),
            new ShippingService()
        );

        await orchestrator.ExecuteOrderWorkflowAsync();
    }
}

#endregion

🔍 Explanation

  • SagaOrchestrator coordinates the entire workflow.

  • Each service executes its local transaction.

  • If any service fails (in this example, Inventory fails), compensating transactions are executed:

    • Release inventory

    • Refund payment

    • Cancel order

This ensures eventual consistency across microservices.


✅ Output (Sample Run)

✅ Order created successfully.
💳 Payment processed for Order123.
📦 Inventory reserved for Order123.
⚠️ Inventory reservation failed for Order123.
⚠️ SAGA Compensation triggered due to: Inventory Reservation Failed
🔄 Inventory released for Order123.
💸 Payment refunded for Order123.
❌ Order Order123 cancelled.
✅ Compensation completed, system is consistent.


DDL, DML, DCL, TCL — Complete SQL Guide

 Short intro / TL;DR:

SQL is organized into command categories that each serve a different purpose: DDL (define schema), DML (manipulate data), DCL (control permissions), and TCL (manage transactions). This article explains each command, shows practical examples, points out vendor differences you should know, and gives sample scripts and best practices you can adapt to your own projects.


What each SQL category means (quick summary)

  • DDL — Data Definition Language: Commands to create/modify/drop database objects (tables, indexes, schemas). Examples: CREATE, ALTER, DROP, TRUNCATE, RENAME.

  • DML — Data Manipulation Language: Commands to read and change the data inside objects. Examples: SELECT, INSERT, UPDATE, DELETE, MERGE (upsert).

  • DCL — Data Control Language: Manage access and privileges. Examples: GRANT, REVOKE (and DENY in SQL Server).

  • TCL — Transaction Control Language: Manage atomic units of work. Examples: BEGIN TRAN / START TRANSACTION, COMMIT, ROLLBACK, SAVEPOINT.


DDL (Data Definition Language)

DDL changes the shape of the database — tables, columns, constraints, indexes.

CREATE

Purpose: create tables, indexes, views, schemas.
Syntax (example):

CREATE TABLE Departments (
  DeptID   INT PRIMARY KEY,
  Name     VARCHAR(100) NOT NULL,
  Location VARCHAR(100)
);

When to use: new tables, indexes, views. Use migrations/DDL scripts under source control.

ALTER

Purpose: modify existing objects (add/modify/drop columns, change constraints).
Syntax (examples):

-- add column
ALTER TABLE Employees ADD HireDate DATE;

-- modify column (SQL Server example)
ALTER TABLE Employees ALTER COLUMN Salary DECIMAL(12,2);

-- drop column
ALTER TABLE Employees DROP COLUMN MiddleName;

Tip: Some engines lock the table for ALTER. For large tables prefer online/zero-downtime migration strategies.

DROP

Purpose: remove objects permanently.

DROP TABLE IF EXISTS TempTable;

Caution: DROP deletes schema + data. Always ensure backups or run in migration scripts.

TRUNCATE

Purpose: remove all rows quickly. Typically faster than DELETE because it deallocates pages.

TRUNCATE TABLE AuditLog;

Gotchas: Behavior varies by DBMS:

  • Often resets identity/autoincrement counters.

  • May be minimally logged, so it’s faster.

  • Some RDBMS treat TRUNCATE as DDL with implicit commit (e.g., older MySQL engines) — in other engines (PostgreSQL, SQL Server) it can be transactional. Check your DB’s docs before relying on rollback behavior.

RENAME

Purpose: rename table or object. Syntax varies:

  • PostgreSQL: ALTER TABLE oldname RENAME TO newname;

  • SQL Server: sp_rename 'oldname', 'newname';
Because syntax differs between DBMSs, prefer ALTER TABLE ... RENAME where supported.


DML (Data Manipulation Language)

DML is what you use day-to-day to read & change rows.

SELECT

Purpose: read data. Key clauses: WHERE, GROUP BY, HAVING, ORDER BY, LIMIT/TOP.
Example:

SELECT DeptID, COUNT(*) AS EmployeeCount
FROM Employees
WHERE Active = 1
GROUP BY DeptID
HAVING COUNT(*) > 5
ORDER BY EmployeeCount DESC;

Tip: avoid SELECT * in production; list only needed columns.

INSERT

Insert single row:

INSERT INTO Employees (EmpID, Name, DeptID, Salary)
VALUES (101, 'Asha', 10, 50000);

Insert multiple rows / insert from select:

INSERT INTO ArchiveEmployees (EmpID, Name, DeptID)
SELECT EmpID, Name, DeptID FROM Employees WHERE Active = 0;

UPDATE

Purpose: change rows that meet a condition.

UPDATE Employees
SET Salary = Salary * 1.05
WHERE Performance = 'A';

Caution: UPDATE without WHERE changes every row.

DELETE

Purpose: remove rows.

DELETE FROM Employees WHERE EmpID = 999;

Note: DELETE logs each row (slower than TRUNCATE), but it can usually be rolled back inside a transaction.

MERGE (UPSERT)

Purpose: insert new rows or update existing rows in a single statement (supported in SQL Server, Oracle; alternatives exist in other DBs).
SQL Server example:

MERGE INTO TargetTable AS T
USING (SELECT EmpID, Salary FROM Staging) AS S
  ON T.EmpID = S.EmpID
WHEN MATCHED THEN
  UPDATE SET Salary = S.Salary
WHEN NOT MATCHED THEN
  INSERT (EmpID, Salary) VALUES (S.EmpID, S.Salary);

Alternatives: PostgreSQL INSERT ... ON CONFLICT, MySQL INSERT ... ON DUPLICATE KEY UPDATE.

Caveat: MERGE in some engines had edge-case bugs historically — test carefully or use engine-native upsert idioms.


DCL (Data Control Language)

Controls who can do what.

GRANT

Grant privileges to a user/role.

GRANT SELECT, INSERT ON Employees TO app_user;

REVOKE

Remove privileges:

REVOKE INSERT ON Employees FROM app_user;

DENY

SQL Server specific: explicitly deny a permission (takes precedence over grant).

DENY DELETE ON Employees TO some_user;

Best practice: use roles/groups (e.g., app_readonly, app_writer) and grant roles, not individual users. Keep least privilege principle.


TCL (Transaction Control Language)

TCL ensures consistency by grouping operations into atomic units.

BEGIN TRAN / START TRANSACTION

Start a transaction.

-- SQL Server
BEGIN TRAN;

-- MySQL / PostgreSQL
START TRANSACTION;

COMMIT

Persist all changes made in the transaction.

COMMIT;

ROLLBACK

Undo changes made in the transaction.

ROLLBACK;

SAVEPOINT

Create a named point to roll back part of a transaction.

SAVEPOINT before_bulk_update;
-- do updates
ROLLBACK TO SAVEPOINT before_bulk_update; -- undo updates only
COMMIT;

Important: keep transactions short to reduce lock contention. Use proper isolation levels to balance consistency and performance.
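The savepoint flow above can be exercised end to end against an in-memory SQLite database via Python's sqlite3 module (a sketch; table names are illustrative, and the doubling update stands in for any bulk change):

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode, so the explicit
# BEGIN/COMMIT below are what controls the transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE Employees (EmpID INTEGER PRIMARY KEY, Salary INTEGER)")
conn.execute("INSERT INTO Employees VALUES (1, 100), (2, 200)")

conn.execute("BEGIN")
conn.execute("UPDATE Employees SET Salary = Salary * 2")
conn.execute("SAVEPOINT before_delete")
conn.execute("DELETE FROM Employees WHERE EmpID = 2")  # the "accidental" delete
conn.execute("ROLLBACK TO SAVEPOINT before_delete")    # undo only the delete
conn.execute("COMMIT")                                 # keep the salary update

rows = conn.execute("SELECT EmpID, Salary FROM Employees ORDER BY EmpID").fetchall()
print(rows)  # [(1, 200), (2, 400)] -- both rows survive, both updated
```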


Other important keywords & clauses (how and when to use them)

WHERE

Filter rows.

SELECT * FROM Orders WHERE OrderDate >= '2025-01-01';

GROUP BY / HAVING

Aggregate rows; HAVING filters groups.

SELECT CustomerID, SUM(Amount) AS Total
FROM Orders
GROUP BY CustomerID
HAVING SUM(Amount) > 1000;

ORDER BY

Sort results.

SELECT * FROM Products ORDER BY CreatedAt DESC;

DISTINCT

Remove duplicates.

SELECT DISTINCT Country FROM Customers;

TOP / LIMIT

Return only N rows:

  • SQL Server: SELECT TOP 10 * FROM ...;

  • MySQL/Postgres: SELECT * FROM ... LIMIT 10;

  • Standard SQL (and Oracle 12c+): SELECT * FROM ... FETCH FIRST 10 ROWS ONLY;

JOINs (INNER, LEFT, RIGHT, FULL)

Combine rows from two tables.

Inner join (only rows with a match in both tables):

SELECT e.Name, d.Name AS DeptName
FROM Employees e
INNER JOIN Departments d ON e.DeptID = d.DeptID;

Left join (all left rows + matching right):

SELECT e.Name, d.Name AS DeptName
FROM Employees e
LEFT JOIN Departments d ON e.DeptID = d.DeptID;

Right / Full join: RIGHT JOIN is the mirror image of LEFT JOIN — it returns all right-table rows plus matching left rows. FULL JOIN returns rows present in either table (not every engine supports it; MySQL, for example, lacks FULL JOIN).

UNION / EXCEPT / INTERSECT

Combine result sets. UNION removes duplicates; UNION ALL keeps them.

SELECT Name FROM Customers
UNION
SELECT Name FROM Employees;

  • INTERSECT: rows common to both queries.

  • EXCEPT (or MINUS in Oracle): rows in the first query but not the second.
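A quick way to compare the three set operators side by side is an in-memory SQLite session (sketch via Python's sqlite3; the sample names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (Name TEXT)")
conn.execute("CREATE TABLE Employees (Name TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?)", [("Ana",), ("Bo",), ("Cy",)])
conn.executemany("INSERT INTO Employees VALUES (?)", [("Bo",), ("Dee",)])

# UNION: everyone, duplicates collapsed
union = conn.execute(
    "SELECT Name FROM Customers UNION SELECT Name FROM Employees ORDER BY Name"
).fetchall()
# INTERSECT: names in both tables
both = conn.execute(
    "SELECT Name FROM Customers INTERSECT SELECT Name FROM Employees"
).fetchall()
# EXCEPT: customers who are not employees
only_customers = conn.execute(
    "SELECT Name FROM Customers EXCEPT SELECT Name FROM Employees ORDER BY Name"
).fetchall()

print(union)           # [('Ana',), ('Bo',), ('Cy',), ('Dee',)]
print(both)            # [('Bo',)]
print(only_customers)  # [('Ana',), ('Cy',)]
```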

WITH (CTE — Common Table Expression)

Make query logic modular and readable:

WITH TopSellers AS (
  SELECT ProductID, SUM(Quantity) AS Qty
  FROM OrderItems
  GROUP BY ProductID
)
SELECT p.Name, t.Qty
FROM TopSellers t
JOIN Products p ON p.ProductID = t.ProductID
ORDER BY t.Qty DESC;

OVER, PARTITION BY (Window functions)

Compute aggregates over partitions without collapsing rows:

SELECT
  EmpID,
  DeptID,
  Salary,
  ROW_NUMBER() OVER (PARTITION BY DeptID ORDER BY Salary DESC) AS RankInDept,
  SUM(Salary) OVER (PARTITION BY DeptID) AS DeptTotalSalary
FROM Employees;

Window functions are powerful for "top N per group", running totals, moving averages, etc.
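For example, "top 1 per department" falls out of ROW_NUMBER() plus a subquery filter. A runnable sketch (needs SQLite 3.25+ for window functions, driven from Python's sqlite3; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # requires SQLite 3.25+ for window functions
conn.execute("CREATE TABLE Employees (EmpID INT, DeptID INT, Salary INT)")
conn.executemany(
    "INSERT INTO Employees VALUES (?, ?, ?)",
    [(1, 10, 80000), (2, 10, 75000), (3, 10, 90000),
     (4, 20, 60000), (5, 20, 65000)],
)

# Rank employees within each department, then keep only rank 1
top_per_dept = conn.execute("""
    SELECT DeptID, EmpID, Salary FROM (
        SELECT EmpID, DeptID, Salary,
               ROW_NUMBER() OVER (PARTITION BY DeptID ORDER BY Salary DESC) AS rn
        FROM Employees
    ) WHERE rn = 1
    ORDER BY DeptID
""").fetchall()

print(top_per_dept)  # [(10, 3, 90000), (20, 5, 65000)]
```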


Practical examples — small end-to-end script

-- DDL: create tables
CREATE TABLE Departments (
  DeptID INT PRIMARY KEY,
  Name VARCHAR(100)
);

CREATE TABLE Employees (
  EmpID INT PRIMARY KEY,
  Name VARCHAR(100),
  DeptID INT REFERENCES Departments(DeptID),
  Salary DECIMAL(12,2),
  Active BIT DEFAULT 1
);

-- DML: insert sample data
INSERT INTO Departments (DeptID, Name) VALUES (10, 'Engineering'), (20, 'HR');
INSERT INTO Employees (EmpID, Name, DeptID, Salary) VALUES
(1, 'Ravi', 10, 80000), (2, 'Leela', 10, 75000), (3, 'Sonal', 20, 60000);

-- TCL: transaction + savepoint (SQL Server syntax; in MySQL/PostgreSQL use
-- START TRANSACTION, SAVEPOINT sp1, and ROLLBACK TO SAVEPOINT sp1)
BEGIN TRAN;
UPDATE Employees SET Salary = Salary * 1.05 WHERE DeptID = 10;
SAVE TRANSACTION sp1;
DELETE FROM Employees WHERE EmpID = 3; -- accidental?
ROLLBACK TRANSACTION sp1; -- undo the delete, keep the salary update
COMMIT;

-- DML: select using window function
SELECT EmpID, Name, DeptID, Salary,
  ROW_NUMBER() OVER (PARTITION BY DeptID ORDER BY Salary DESC) AS RankInDept
FROM Employees;

Best practices & real-world tips

  • Use migrations (scripts + version control) for DDL — don’t manually change production schema. Tools: Flyway, Liquibase, EF Migrations, etc.

  • Back up before destructive DDL (DROP, big ALTER).

  • Keep transactions short. Long-running transactions block other work and can cause locks.

  • Use parameterized queries to avoid SQL injection.

  • Index columns used in JOIN, WHERE, and ORDER BY; but don’t over-index (write penalty). Use EXPLAIN to inspect query plans.

  • Avoid SELECT *. Explicit columns are clearer and cheaper.

  • Be mindful of DBMS differences. TRUNCATE, MERGE, RENAME, privilege syntax and transaction semantics can differ. Test on your engine.

  • Use roles for permissions and avoid granting GRANT ALL to application users.
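The parameterized-query advice is easy to demonstrate: the same malicious input is harmless as a bound parameter but rewrites the query when concatenated into the SQL text (sketch using Python's sqlite3 against a throwaway table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Name TEXT)")
conn.execute("INSERT INTO Users VALUES ('alice')")

# Classic injection payload: turns a name lookup into an always-true predicate
evil = "' OR '1'='1"

# Safe: the placeholder sends the value separately from the SQL text,
# so the payload is just compared as a literal string
rows = conn.execute("SELECT Name FROM Users WHERE Name = ?", (evil,)).fetchall()
print(rows)  # [] -- the malicious string matches no user

# Unsafe: string concatenation lets the input become part of the query
unsafe_rows = conn.execute(
    "SELECT Name FROM Users WHERE Name = '" + evil + "'"
).fetchall()
print(unsafe_rows)  # [('alice',)] -- injection made the WHERE clause always true
```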


Common gotchas & vendor differences (short list)

  • TRUNCATE behavior and rollback semantics vary across engines — check documentation.

  • MERGE is supported differently and had issues historically in some engines — consider engine-native upsert patterns (e.g., INSERT ... ON CONFLICT in PostgreSQL).

  • DENY exists in SQL Server, not in MySQL/Postgres.

  • TOP vs LIMIT vs FETCH FIRST — syntax differs across DBMS.


