Thursday, October 16, 2025

What are Environment Variables in Microservices — A Complete Guide

Introduction

Environment variables are one of the simplest and most common ways to pass configuration and runtime settings into applications. In microservices architectures — where you run many small services independently (often in containers) — environment variables let you decouple configuration from code so the same build can run in dev, staging, and production with different behavior.

This article explains what environment variables are, how they’re used inside microservices, and concrete examples (Docker, Kubernetes, .NET). It also covers security, best practices, and troubleshooting.


What is an environment variable?

An environment variable is a named value provided by the operating environment (OS, container runtime, orchestrator) that an application can read at runtime. Examples:

  • DATABASE_URL=postgres://user:pass@db:5432/mydb

  • ASPNETCORE_ENVIRONMENT=Production

  • API_KEY=xyz

Key idea: with configuration via environment variables, the code doesn’t need to change across deployments — only the set of environment variables changes.


Why microservices use environment variables

  1. Separation of config and code — same build artifact, different environment settings.

  2. 12-Factor app compliance — environment variables are one of the 12-factor recommendations for config.

  3. Container friendliness — Docker, Kubernetes and serverless platforms natively support env vars.

  4. Simplicity — easy to set and read from any language/runtime.

  5. Integration with orchestration — k8s ConfigMap/Secret, cloud config services map nicely to env vars.


Types of configuration you usually store in env vars

  • Connection strings and endpoints (DB_HOST, REDIS_URL)

  • Feature flags and mode (FEATURE_X_ENABLED=true, ENV=staging)

  • API keys and short-lived tokens (preferably via secrets manager)

  • Service-specific settings (MAX_WORKERS=5, LOG_LEVEL=info)

Note: For long-term secrets, prefer a secret manager (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) or k8s Secrets — see security section.
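As a quick sketch of this pattern, here is how a Node.js service might read and coerce these kinds of values at startup (the variable names and defaults are illustrative, not from any particular service):

```javascript
// Sketch: reading typed settings from the environment in Node.js.
// DB_HOST, DB_PORT, MAX_WORKERS, FEATURE_X_ENABLED, LOG_LEVEL are illustrative.
function loadConfig(env = process.env) {
  return {
    dbHost: env.DB_HOST || "localhost",
    dbPort: parseInt(env.DB_PORT || "5432", 10),
    maxWorkers: parseInt(env.MAX_WORKERS || "5", 10),
    featureXEnabled: env.FEATURE_X_ENABLED === "true", // env vars are always strings
    logLevel: env.LOG_LEVEL || "info",
  };
}

const config = loadConfig({ MAX_WORKERS: "8", FEATURE_X_ENABLED: "true" });
console.log(config.maxWorkers);      // 8
console.log(config.featureXEnabled); // true
console.log(config.dbHost);          // "localhost" (default applied)
```

Note that every environment variable arrives as a string, so numeric and boolean settings need explicit conversion.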


How to provide environment variables to microservices

1. Docker (local / containers)

  • docker run -e NAME=value image

  • docker run --env-file .env image

  • Dockerfile ENV instruction (bakes into image; generally avoid storing secrets in image)

Example docker run:

docker run \
  -e ASPNETCORE_ENVIRONMENT=Production \
  -e ConnectionStrings__Default="Server=db;Database=app;User Id=sa;Password=secret;" \
  myservice:latest

2. Docker Compose

docker-compose.yml:

services:
  api:
    image: myservice:latest
    env_file:
      - .env
    environment:
      - LOG_LEVEL=info

.env file:

DB_HOST=db
DB_PORT=5432

3. Kubernetes (ConfigMap and Secret)

  • ConfigMap for non-sensitive config.

  • Secret for sensitive data (note: k8s Secrets are only base64-encoded by default, not encrypted; enable encryption at rest).

Example Deployment using env from ConfigMap and Secret:

envFrom:
  - configMapRef:
      name: my-config
  - secretRef:
      name: my-secret

Or explicit env mapping:

env:
  - name: DB_HOST
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: db_host
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: db_password

4. Cloud platforms

  • Azure App Service / AWS ECS / GCP Cloud Run: allow setting app settings / environment variables in the platform UI or IaC (ARM, CloudFormation, Terraform).

  • Use cloud secret integrations to inject secrets as env vars or mounted files.

5. CI/CD pipelines

Inject environment variables during builds or deploys (GitHub Actions env, Azure Pipelines variables, GitLab CI variables), but avoid putting secrets in plain logs.
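For example, a GitHub Actions job might set a non-sensitive variable directly and pull a sensitive one from the repository’s encrypted secrets store, which is masked in logs automatically (job, script, and secret names here are illustrative):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      LOG_LEVEL: info            # non-sensitive, fine to set inline
    steps:
      - name: Deploy service
        env:
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}   # from encrypted secrets
        run: ./deploy.sh
```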


How to read environment variables inside microservices

.NET (ASP.NET Core / .NET 6+)

ASP.NET Core integrates environment variables into IConfiguration automatically when using the default WebHost/Host builder. Example minimal API:

var builder = WebApplication.CreateBuilder(args);

// Configuration picks up appsettings.json and environment variables by default
var configuration = builder.Configuration;
var conn = configuration["ConnectionStrings:Default"]; // also reads the ConnectionStrings__Default env var

var app = builder.Build();
app.MapGet("/", () => $"DB={conn}");
app.Run();

Directly via Environment:

var dbHost = Environment.GetEnvironmentVariable("DB_HOST");

Important: .NET configuration supports double-underscore mapping to nested keys — ConnectionStrings__Default -> ConnectionStrings:Default.
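The mapping itself is simple: each double underscore becomes the `:` hierarchy separator. This small Node.js snippet is purely illustrative (.NET performs this mapping internally in its environment-variable configuration provider):

```javascript
// Illustration only: mimics how .NET maps double underscores in
// environment variable names to ':' separators in configuration keys.
function toConfigKey(envVarName) {
  return envVarName.split("__").join(":");
}

console.log(toConfigKey("ConnectionStrings__Default")); // "ConnectionStrings:Default"
console.log(toConfigKey("Logging__LogLevel__Default")); // "Logging:LogLevel:Default"
```

The double underscore exists because `:` is not a legal character in environment variable names on all platforms.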

Node.js

const port = process.env.PORT || 3000;

Java (Spring Boot)

Spring Boot reads env vars automatically into configuration properties — or use @Value("${DB_HOST}").


Examples — end-to-end

Example: Containerized .NET microservice using Docker + env vars

  1. Build image:

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY ./publish .
ENV ASPNETCORE_URLS=http://+:80
ENTRYPOINT ["dotnet", "MyService.dll"]

  2. Run with env:

docker run -e ConnectionStrings__Default="Server=db;..." -e LOG_LEVEL=debug myservice:latest

  3. In code the config is available via builder.Configuration["ConnectionStrings:Default"].

Example: Kubernetes config + secret usage

  • kubectl create configmap my-config --from-literal=LOG_LEVEL=info

  • kubectl create secret generic my-secret --from-literal=DB_PASSWORD=supersecret

  • Deployment uses envFrom as shown earlier.


Best practices and patterns

Follow the 12-factor app pattern

  • Store config in the environment; do not hard-code environment-specific settings.

Prefer platform secret stores for sensitive data

  • Use HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or k8s providers (external secrets) to deliver secrets safely.

  • Inject secrets at runtime — either as env vars or mounted files.

Use distinct variables for environment and secrets

  • ASPNETCORE_ENVIRONMENT=Development|Staging|Production

  • Prefer a clearly named DB_CONNECTION_STRING variable over embedding user names and passwords in code.

Don’t log secrets

  • Ensure logging/configuration doesn’t print raw env vars.

Use typed configuration and validation

  • In .NET, bind configuration to strongly typed options and validate on startup (IOptions with Data Annotations or custom validation). Fail fast if required config is missing.
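The same fail-fast idea applies in any runtime. A minimal Node.js sketch (the variable names are illustrative) that refuses to start when required variables are missing:

```javascript
// Sketch: fail-fast validation of required environment variables at startup.
function validateEnv(required, env = process.env) {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Refuse to start with incomplete configuration
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}

// Passes: both variables are present in the supplied environment
validateEnv(["DB_HOST", "DB_PASSWORD"], { DB_HOST: "db", DB_PASSWORD: "s3cret" });
```

Failing at startup turns a subtle runtime error (a half-configured service) into an immediate, obvious deployment error.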

Minimize env var surface

  • Only expose what the service needs. Keep variable names consistent across services.

Namespacing and conventions

  • Prefix variables per service or team: PAYMENTS_DB_HOST vs ORDERS_DB_HOST.

Rotation and revocation

  • Plan for secret rotation; short-lived tokens are safer than long-lived credentials.

Use files for very large secrets

  • Some platforms mount secrets as files (e.g., Docker secrets, k8s secrets volume). Reading from files may be more secure for large certs.


Security considerations & caveats

  • Env vars are visible to the process and can be leaked via process dumps or certain debugging tools. They also appear to any user who can inspect the process environment (on some systems).

  • Kubernetes Secrets are only base64-encoded, not encrypted — enable encryption at rest or use an external secrets manager for production.

  • Do not store secrets in source control, including Dockerfile ENV instructions containing passwords.

  • Least privilege: container/pod/service account should have minimal permissions to retrieve secrets.

  • Audit and monitor access to secret stores.


Troubleshooting tips

  • Confirm env var exists: printenv inside container or kubectl exec -it pod -- printenv.

  • Check precedence: in many stacks, command-line args > env vars > config files. Know your framework’s precedence rules.

  • For .NET: check for __ vs : mapping (ConnectionStrings__Default).

  • Avoid trailing spaces/newlines in values from secrets — they can break connection strings.


Quick checklist before production rollout

  • All required variables documented and validated at startup.

  • Secrets delivered via secure secret manager; not baked into images.

  • Access to secrets restricted and audited.

  • CI/CD injects env vars securely (pipeline secrets).

  • Health checks and log redaction in place.

  • Config matches the environment (ASPNETCORE_ENVIRONMENT, etc.).


Summary

Environment variables are a simple, platform-friendly way to configure microservices without changing code. They work exceptionally well in containerized and orchestrated environments (Docker, Kubernetes), but handling secrets requires care: use managed secret stores, follow the 12-factor approach, and validate configuration at startup. For .NET developers, IConfiguration + environment providers and Environment.GetEnvironmentVariable are the standard ways to access variables; remember the double-underscore convention for nested keys.

🧠 What is an LLM (Large Language Model) and How Does It Work? | Complete Guide for .NET Developers

💡 Introduction

In recent years, Artificial Intelligence (AI) has transformed the way we interact with computers. Among all AI innovations, LLMs (Large Language Models) have gained massive attention because they can understand, generate, and reason with human-like language.

Tools like ChatGPT, Google Gemini, and Anthropic Claude are all powered by LLMs. But what exactly is an LLM, how does it work, and how can you use it in your .NET development projects? Let’s break it down.


🤖 What is an LLM (Large Language Model)?

An LLM (Large Language Model) is a type of AI model trained on massive amounts of text data — such as books, articles, code, and websites — to understand and generate human-like text.

In simple words:

🗣️ An LLM is like a super-smart chatbot that has read a huge portion of the internet and can write, summarize, translate, and even generate code for you.

Examples of popular LLMs include:

  • OpenAI GPT-4 / GPT-3.5 (used in ChatGPT)

  • Google Gemini (formerly Bard)

  • Anthropic Claude

  • Meta LLaMA 3

  • Mistral AI


⚙️ How Does an LLM Work?

LLMs are based on a deep learning architecture called the Transformer model. Here’s a step-by-step view of how it works:

🧩 1. Training with Huge Data

The LLM is trained using terabytes of text data. It learns patterns, grammar, facts, and even logic by predicting the next word in a sentence.

Example:
If the sentence is — “C# is a programming ____”
the model learns that the next word is likely “language.”
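To make the idea concrete, here is a toy next-word predictor in Node.js built on simple bigram counts. It is nothing like a real Transformer — just an illustration of “predict the next word from what came before”:

```javascript
// Toy illustration (not a real LLM): count which word most often
// followed each word in a tiny "corpus", then predict the most common one.
function trainBigrams(corpus) {
  const counts = {};
  for (const sentence of corpus) {
    const words = sentence.toLowerCase().split(/\s+/);
    for (let i = 0; i < words.length - 1; i++) {
      const cur = words[i], next = words[i + 1];
      counts[cur] = counts[cur] || {};
      counts[cur][next] = (counts[cur][next] || 0) + 1;
    }
  }
  return counts;
}

function predictNext(counts, word) {
  const followers = counts[word.toLowerCase()] || {};
  let best = null;
  for (const [next, n] of Object.entries(followers)) {
    if (!best || n > best.n) best = { next, n };
  }
  return best ? best.next : null;
}

const model = trainBigrams([
  "C# is a programming language",
  "Python is a programming language",
  "Go is a programming tool",
]);
console.log(predictNext(model, "programming")); // "language"
```

Real LLMs do the same thing in spirit, but over billions of parameters and whole contexts rather than a single preceding word.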

🧮 2. Understanding Context

Using a mechanism called self-attention, the model can understand context — meaning it knows what each word in a sentence relates to, even across long paragraphs.
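For readers who want the formula, the heart of self-attention is the scaled dot-product attention from the original Transformer paper, where Q, K, and V are the query, key, and value matrices and d_k is the key dimension:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

Intuitively, each word scores every other word for relevance (the QK^T part) and then blends in their information weighted by those scores.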

🧠 3. Generating Human-Like Responses

Once trained, the model can generate text, code, summaries, and more — just like a human — when you give it a prompt (your input).

🗂️ 4. Fine-tuning and APIs

Companies fine-tune base LLMs for specific purposes — such as customer support, coding assistants, or content creation — and then provide access via APIs.


🧰 Real-World Examples of LLMs

LLM Name         | Developer | Use Case
ChatGPT (GPT-4)  | OpenAI    | Chat, writing, coding
Gemini           | Google    | Search and productivity
Claude 3         | Anthropic | Document understanding
LLaMA 3          | Meta      | Open-source AI research
Cohere Command R | Cohere AI | Enterprise chatbots

💻 How to Use LLMs in .NET Development

You can integrate LLMs like OpenAI GPT-4 or Azure OpenAI directly into your .NET Core applications.
Here’s a simple example using OpenAI’s API.

🧱 Step 1: Install Required Package

In your .NET project, install the OpenAI package via NuGet:

dotnet add package OpenAI

🧾 Step 2: Set Up API Key

You’ll need an API key from OpenAI or Azure OpenAI.

Store your API key securely in appsettings.json:

{
  "OpenAI": {
    "ApiKey": "your-api-key-here"
  }
}

⚙️ Step 3: Use in .NET Code

using OpenAI;
using OpenAI.Chat;
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // In production, load the key from configuration or a secret store
        // instead of hard-coding it
        var api = new OpenAIClient("your-api-key-here");
        var chat = api.ChatEndpoint;

        var response = await chat.GetCompletionAsync(
            "Write a motivational quote about coding in C#");

        Console.WriteLine(response.FirstChoice.Message.Content);
    }
}

🧩 Sample output (actual responses will vary):

"Code is like poetry — every line should have purpose and beauty."

🧠 Advanced Integration Ideas

Here are some ideas to use LLMs in your .NET projects:

  1. 🗣️ Chatbots for customer service or internal queries

  2. 📄 Text summarization tools for reports and emails

  3. 💬 Code assistant to generate or review C# code

  4. 🧾 Document understanding (PDFs, invoices, resumes)

  5. 🔍 Semantic search to improve knowledge base systems


🔒 Using LLMs Securely

When using LLMs in enterprise applications:

  • Don’t send sensitive or personal data to public APIs.

  • Use Azure OpenAI Service for secure enterprise usage.

  • Cache responses to reduce API costs.

  • Monitor and validate model outputs.
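Caching can be as simple as an in-memory map keyed by the prompt. A minimal Node.js sketch, where callModel is a placeholder standing in for your real API call:

```javascript
// Sketch: avoid paying for the same LLM call twice by caching responses.
const cache = new Map();

async function cachedCompletion(prompt, callModel) {
  if (cache.has(prompt)) {
    return cache.get(prompt); // cache hit: no API call, no cost
  }
  const result = await callModel(prompt);
  cache.set(prompt, result);
  return result;
}
```

Production systems typically add an expiry (TTL) or use a shared store like Redis so repeated prompts across instances also hit the cache.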


🚀 Conclusion

LLMs are the core of Generative AI — they can understand, reason, and create text like a human. By integrating them into your .NET applications, you can build intelligent chatbots, automation tools, and productivity apps.

As a .NET developer, learning how to use APIs like OpenAI or Azure OpenAI will open new doors for AI-driven applications in the modern era.
