Glasnost Systems

Testing Quick Notes

Different Types of Testing

🔹 1. Functional Testing

Focuses on verifying that the software functions according to requirements.

  • Unit Testing: Tests individual units or components of code.
     
  • Integration Testing: Tests interactions between integrated modules.
     
  • System Testing: Tests the complete application as a whole.
     
  • Sanity Testing: Verifies specific functionality after minor changes.
     
  • Smoke Testing: Basic check to ensure core functionality works.
     
  • Regression Testing: Ensures new code changes don’t break existing features.
     
  • User Acceptance Testing (UAT): Validates the system meets business requirements.
     

🔹 2. Non-Functional Testing

Evaluates how the system performs under various conditions.

  • Performance Testing: Measures speed, responsiveness, and stability.
     
  • Load Testing: Simulates expected user traffic.
     
  • Stress Testing: Puts the system under extreme conditions.
     
  • Soak (Endurance) Testing: Tests under load over an extended period.
     
  • Spike Testing: Tests response to sudden surges in traffic.
     
  • Scalability Testing: Assesses the system’s ability to scale up/down.
     
  • Security Testing: Identifies vulnerabilities and ensures data protection.
     
  • Usability Testing: Evaluates user-friendliness and ease of use.
     
  • Compatibility Testing: Verifies behavior across browsers, devices, OSs.
     

🔹 3. Maintenance Testing

Checks behavior after software has been modified.

  • Confirmation Testing: Re-tests a fixed defect to confirm the issue is resolved.
     
  • Regression Testing: Re-validates unchanged parts after updates.
     

🔹 4. Specialized Testing

Targeted for specific technologies or conditions.

  • API Testing: Verifies back-end services and integrations (a sketch follows this list).
     
  • Cloud Testing: Tests scalability, security, and availability in cloud environments.
     
  • Mobile Testing: Ensures functionality and performance on mobile devices.
     
  • Accessibility Testing: Ensures the application is usable by people with disabilities.
     
  • Localization Testing: Validates language, format, and cultural aspects for different regions.
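
To make the API Testing bullet concrete, here is a minimal sketch using REST Assured in Java. The base URI, endpoint, and expected payload are placeholder assumptions for illustration, not a real service.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class UserApiTest {
    public static void main(String[] args) {
        given()
            .baseUri("https://api.example.com")   // hypothetical service URL
        .when()
            .get("/users/123")                    // hypothetical endpoint
        .then()
            .statusCode(200)                      // assert the HTTP status
            .body("name", equalTo("Alice"));      // assert a field in the JSON body
    }
}

In a real suite this would live in a JUnit test method rather than main, with the base URI injected per environment.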

Cloud Testing: Types and Why They Matter

  

Testing for cloud scalability ensures your application can handle increasing (or decreasing) workloads efficiently in a cloud environment. It focuses on performance, resource utilization, and elasticity. The key types of testing needed for cloud scalability are:

  

🔹 1. Load Testing

  • Purpose: Simulate expected user traffic to check how the system handles normal and peak loads.
  • Why it's important: Verifies auto-scaling works correctly when user demand increases.

  

🔹 2. Stress Testing

  • Purpose: Push the system beyond its capacity to see how it fails and recovers.
  • Why it's important: Identifies the application's breaking point and how it behaves under extreme load.

  

🔹 3. Soak (Endurance) Testing

  • Purpose: Run the system under a significant load for an extended period.
  • Why it's important: Ensures the system doesn’t degrade over time (e.g., memory leaks, resource starvation).

  

🔹 4. Spike Testing

  • Purpose: Test sudden increases or decreases in load.
  • Why it's important: Ensures the system can quickly adapt to abrupt traffic surges, like flash sales or viral events.

  

🔹 5. Auto-Scaling Testing

  • Purpose: Test the cloud infrastructure’s ability to automatically scale resources up/down based on demand.
  • Why it's important: Verifies that scaling policies (e.g., CPU, memory thresholds) are correctly configured.

  

🔹 6. Failover and Recovery Testing

  • Purpose: Simulate failures to test how the application handles node or region outages.
  • Why it's important: Ensures high availability and disaster recovery in cloud environments.

  

🔹 7. Capacity Testing

  • Purpose: Identify the maximum number of users or workload the system can handle while maintaining acceptable performance.
  • Why it's important: Helps define scaling limits and plan infrastructure capacity.

  

🔹 8. Elasticity Testing

  • Purpose: Evaluate the system’s ability to efficiently scale out and scale in as demand changes.
  • Why it's important: Measures cost efficiency and responsiveness to dynamic workloads.

  

✅ Tools Commonly Used:

  • JMeter, Gatling, Locust – for load/stress testing.
  • AWS CloudWatch, Azure Monitor, Google Cloud Operations Suite – for auto-scaling and cloud resource metrics.
  • Chaos Monkey (Netflix OSS) – for fault injection and failover tests.
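
The load and spike profiles above translate naturally into code. Below is a minimal sketch using Gatling's Java DSL (Gatling is listed in the tools above); the target URL, user counts, and durations are placeholder assumptions, not recommendations.

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import java.time.Duration;
import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class ScalabilitySimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.com"); // hypothetical system under test

    ScenarioBuilder browse = scenario("Browse")
            .exec(http("home").get("/"))
            .pause(1);

    {
        setUp(browse.injectOpen(
                rampUsers(1000).during(Duration.ofMinutes(5)), // load test: ramp to expected peak
                nothingFor(Duration.ofMinutes(1)),             // settle period
                atOnceUsers(2000)                              // spike test: sudden surge
        )).protocols(httpProtocol);
    }
}

Whether auto-scaling actually kicked in during the ramp and spike is then confirmed from the cloud side, e.g., in CloudWatch or Azure Monitor.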

What is Localization Testing and how is it performed?

Localization Testing is a type of software testing that verifies whether an application behaves correctly and is appropriately adapted for a specific locale, culture, or region. It ensures that the product is linguistically accurate, culturally relevant, and functionally suitable in the target market.

✅ What Does Localization Testing Cover?

  1. Language Translation
     
    • UI text, labels, messages, and error logs are correctly translated.
       
    • No hardcoded strings in the source language.
       

  2. Cultural Appropriateness
     
    • Images, icons, colors, and symbols are suitable and respectful of the target culture.
       

  3. Date/Time Formats
     
    • Follows local formatting (e.g., MM/DD/YYYY in US vs DD/MM/YYYY in UK).
       

  4. Currency & Number Formats
     
    • Correct currency symbol, number separators, and decimal marks.
       

  5. Address Formats
     
    • Matches country-specific structure and fields (e.g., ZIP code vs postal code).
       

  6. Sorting & Text Direction
     
    • Alphabetical sorting rules and support for right-to-left (RTL) languages like Arabic or Hebrew.
       

  7. Legal & Regulatory Compliance
     
    • Local terms of service, privacy policies, disclaimers, etc.
       

🛠️ How to Perform Localization Testing

1. Set Up Localized Environment

  • Change system settings or locale in the application (language, region, timezone).
     
  • Use emulators or real devices to test different regions.
     

2. Test Language Accuracy

  • Use professional translators or native speakers to validate translations.
     
  • Check for grammar, spelling, context, and tone.
     

3. Check UI Layout & Rendering

  • Make sure text expansion (e.g., German) doesn't break the layout.
     
  • Ensure special characters or right-to-left scripts are rendered correctly.
     

4. Verify Functional Behavior

  • All buttons, links, forms, and workflows work as expected in the localized version.
     

5. Test Regional Settings

  • Validate time zones, calendar systems, number formats, etc.
     

6. Automated & Manual Testing

  • Use automated tools for repeated checks (e.g., screenshot comparison tools).
     
  • Perform manual testing for language, tone, and UI nuances.
     

🔍 Tools Commonly Used

  • Globalyzer – Checks source code for internationalization issues.
     
  • Pseudo-localization – Simulates translated content to detect layout/UI issues early.
     
  • BrowserStack, Sauce Labs – For cross-browser and cross-region testing.
     
  • Crowdin, Lokalise – For managing localization workflows and files.


How to Automate Localization Testing? 



Automating localization testing can greatly improve test coverage and reduce the time needed to verify multiple languages and locales. However, because it involves nuanced visual and linguistic elements, a hybrid approach (automation + manual validation) is often ideal.


Here’s a step-by-step guide on how to automate localization testing effectively:


✅ 1. Externalize All Text


Ensure all UI strings are stored in external resource files (e.g., .properties, .resx, .json, or .po files) rather than hardcoded.

  • Use consistent keys across all locales.
     
  • Example:
    { "welcome_message": "Welcome", "submit_button": "Submit" }
     

✅ 2. Use Pseudo-Localization Early

Simulate localization by replacing characters with accented or extended versions to catch layout issues before real translations are ready.

  • Example: "Welcome" → "[!!! Ŵëļçõɱè !!!]"
     

Tools:

  • PseudoLocalizer, built-in tools in Android Studio or Xcode
     
  • Custom scripts for web applications
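
For web applications without built-in support, a custom pseudo-localizer is easy to sketch. Here is one in plain Java; the character map and ~30% padding ratio are illustrative assumptions.

import java.util.Map;

public class PseudoLocalizer {

    // Illustrative accent substitutions; extend to cover your full alphabet.
    private static final Map<Character, Character> ACCENTS = Map.of(
            'a', 'á', 'e', 'ë', 'i', 'ï', 'o', 'õ',
            'u', 'û', 'c', 'ç', 'W', 'Ŵ');

    // Wrap in markers and pad ~30% to mimic translation expansion (e.g., German).
    public static String pseudo(String s) {
        StringBuilder sb = new StringBuilder("[!!! ");
        for (char c : s.toCharArray()) {
            sb.append(ACCENTS.getOrDefault(c, c));
        }
        sb.append(" ").append("~".repeat(Math.max(1, s.length() * 3 / 10))).append(" !!!]");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(pseudo("Welcome")); // -> [!!! Ŵëlçõmë ~~ !!!]
    }
}

The markers make untranslated (hardcoded) strings stand out immediately in the UI, and the padding surfaces layout breakage before real translations arrive.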
     

✅ 3. Automate UI Validation with Screenshot Comparison

Use automation to capture and compare screenshots across different locales.

Tools:

  • Selenium/Appium + Percy, Applitools, or Playwright – for visual testing
     
  • SikuliX – image-based comparison
     
  • TestCafe – browser automation with built-in localization support
     

Steps:

  1. Launch app in different locales (e.g., en-US, fr-FR, de-DE).
     
  2. Capture screenshots of key screens.
     
  3. Compare against baseline images using tools like Applitools or Percy.
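
Steps 1 and 2 can be scripted with plain Selenium in Java; here is a minimal sketch assuming a Chromium-based browser and a hypothetical target page. The resulting PNGs would then go to Applitools, Percy, or an image-diff library for step 3.

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class LocaleScreenshots {
    public static void main(String[] args) throws Exception {
        Files.createDirectories(Path.of("screenshots"));
        for (String locale : new String[] {"en-US", "fr-FR", "de-DE"}) {
            ChromeOptions options = new ChromeOptions();
            options.addArguments("--lang=" + locale);   // sets the browser's UI/Accept-Language locale
            WebDriver driver = new ChromeDriver(options);
            try {
                driver.get("https://example.com/login"); // hypothetical key screen
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                Files.copy(shot.toPath(), Path.of("screenshots", "login_" + locale + ".png"),
                        StandardCopyOption.REPLACE_EXISTING);
            } finally {
                driver.quit();
            }
        }
    }
}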
     

✅ 4. Automate Functional Validation in Different Locales

Using Selenium, Appium, or Playwright:

Example (Selenium - Java):

Locale locale = new Locale("fr", "FR");
ResourceBundle messages = ResourceBundle.getBundle("Messages", locale);
// assertEquals takes (expected, actual): expected text from the bundle, actual from the UI.
assertEquals(messages.getString("welcome_message"),
        driver.findElement(By.id("welcome")).getText());

✅ 5. Validate Resource Files Automatically

Write scripts to (see the sketch after the tool list below):

  • Ensure all keys exist in every language.
     
  • Check for missing or unused translations.
     
  • Detect formatting mismatches (e.g., placeholders not aligned: %s, {0}, etc.).
     

Tools:

  • Lokalise CLI, Crowdin CLI
     
  • Custom Python or JavaScript scripts
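
In the same spirit as the custom scripts above, here is a minimal key-completeness check, kept in Java to match the other examples in these notes. The two .properties file names are hypothetical; adjust to your bundle layout.

import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class BundleKeyCheck {
    public static void main(String[] args) throws IOException {
        Properties base = load("Messages_en.properties");   // hypothetical base bundle
        Properties locale = load("Messages_fr.properties"); // hypothetical translation

        // Keys present in the base bundle but missing from the translation.
        for (String key : base.stringPropertyNames()) {
            if (!locale.containsKey(key)) {
                System.out.println("MISSING in fr: " + key);
            }
        }
        // Stale keys in the translation that no longer exist in the base.
        for (String key : locale.stringPropertyNames()) {
            if (!base.containsKey(key)) {
                System.out.println("UNUSED in fr: " + key);
            }
        }
    }

    private static Properties load(String path) throws IOException {
        Properties p = new Properties();
        try (FileReader reader = new FileReader(path)) {
            p.load(reader);
        }
        return p;
    }
}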
     

✅ 6. Integrate into CI/CD

Run localization tests as part of your CI/CD pipeline:

  • Run visual/UI checks on every build.
     
  • Validate resource file integrity on commit.
     

✅ 7. Use AI-Powered Localization Testing Tools

These tools can detect translation and UI issues more intelligently:

  • Globalyzer – for static code scanning
     
  • Applitools – AI-powered visual testing
     
  • Lokalise QA – for automated localization QA
     

🔄 Best Practices

  • Prioritize high-traffic or high-visibility screens for automation.
     
  • Combine automation with manual review by native speakers for tone and cultural relevance.
     
  • Maintain a baseline set of screenshots for each language.
     
  • Monitor changes in UI layout as translations update.


Performance Testing details

Performance testing evaluates how a system behaves under various conditions, ensuring it is fast, stable, scalable, and reliable. It helps identify performance bottlenecks before an application goes live.


✅ Types of Performance Testing


  1. Load Testing: Test the system under expected user load. Example: simulate 10,000 users accessing a shopping site.

  2. Stress Testing: Push the system beyond its limits to see how it fails and recovers. Example: check how the app behaves during a traffic spike.

  3. Spike Testing: Apply sudden, sharp load increases and decreases. Example: simulate flash-sale or breaking-news traffic.

  4. Endurance (Soak) Testing: Test the system under continuous load for an extended period. Example: run the app for 8 hours to detect memory leaks.

  5. Scalability Testing: Measure the system’s ability to scale up or down with load. Example: test auto-scaling behavior in cloud infrastructure.

  6. Volume (Flood) Testing: Test the system’s ability to handle a large volume of data. Example: insert 1 million records into a database and measure query performance.

  7. Configuration Testing: Test performance under different system configurations. Example: check app behavior with different OS or network settings.


🛠️ Popular Performance Testing Tools


  • Apache JMeter: Open-source tool for load testing web apps and services. Best for load, stress, and endurance testing.

  • LoadRunner (Micro Focus): Enterprise-grade tool with powerful analytics. Best for complex enterprise systems.

  • Gatling: Developer-friendly tool based on Scala, with good CI/CD integration. Best for high-performance HTTP-based testing.

  • Locust: Python-based, user-friendly, and scalable. Best for distributed load testing via code.

  • k6: Modern, scriptable, CLI-based load testing tool with CI support. Best for testing APIs and microservices.

  • Artillery: Lightweight CLI tool for HTTP and WebSocket performance testing. Best for Node.js-based performance scripts.

  • BlazeMeter: Cloud-based platform compatible with JMeter scripts. Best for scalable cloud performance testing.

  • NeoLoad: Designed for DevOps and CI/CD. Best for testing modern web and mobile apps.

  • AWS CloudWatch + CloudTest: Monitors and tests performance in AWS infrastructure. Best for cloud-native applications.
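
JMeter plans are usually built in its GUI, but for a code-first flavor here is a hedged sketch using the third-party jmeter-java-dsl library (us.abstracta); the endpoint, thread counts, and the p99 check are placeholder assumptions.

import static us.abstracta.jmeter.javadsl.JmeterDsl.*;

import java.io.IOException;
import us.abstracta.jmeter.javadsl.core.TestPlanStats;

public class QuickLoadTest {
    public static void main(String[] args) throws IOException {
        // 100 concurrent users, 10 iterations each, against a placeholder endpoint.
        TestPlanStats stats = testPlan(
                threadGroup(100, 10,
                        httpSampler("https://example.com/products"))
        ).run();
        // 99th-percentile response time across all samples.
        System.out.println("p99: " + stats.overall().sampleTime().perc99());
    }
}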


📈 Metrics to Track in Performance Testing

  • Response Time
     
  • Throughput (requests/sec)
     
  • Error Rate
     
  • CPU & Memory Usage
     
  • Network Latency
     
  • Concurrency/Active Users
     


Contract Testing

Contract testing is a technique used primarily in microservices and distributed systems to ensure that services (producers) and clients (consumers) can communicate and interact with each other as expected — without relying on full end-to-end integration tests.

It acts as a mutual agreement or "contract" between a service and its consumers, specifying the expected input/output. The goal is to catch integration issues early and independently in CI pipelines.


✅ What is Contract Testing?

In simple terms:

A contract is a shared understanding of how two systems (like an API provider and a client) should interact.
 

Instead of testing the entire system together, contract testing ensures that:

  • The provider honors the agreed-upon contract.
     
  • The consumer expects only what’s in the contract.
     

🧩 When is Contract Testing Used?

  • Microservices architectures
     
  • APIs (REST, GraphQL, gRPC)
     
  • Consumer-driven development
     
  • CI/CD pipelines with independent deployments
     

🔄 Types of Contract Testing

  • Consumer-Driven Contract Testing: The consumer defines what it expects; the provider tests against these expectations.

  • Provider-Driven Contract Testing: The provider defines the contract, and consumers verify that they can consume it.

  • Bidirectional Contract Testing: Both provider and consumer define expectations, and both are tested against each other.


🛠️ How Contract Testing Works

  1. Consumer Writes the Contract
     
    • Defines expected requests and responses (e.g., JSON schema).
       
    • Example: "GET /users/123 should return a 200 with a user object."
       

  2. Contract is Shared with Provider
     
    • Usually via a contract broker like Pactflow or a shared repo.
       

  3. Provider Verifies the Contract
     
    • Runs tests to ensure its API implementation satisfies the contract.
       

  4. CI Integration
     
    • On every commit or build, both consumer and provider test independently.
       
    • Failing contracts = integration failure.
       

🛠️ Popular Tools for Contract Testing

  • Pact (Java, JS, .NET, Go, and more): The most popular consumer-driven contract testing tool.

  • Spring Cloud Contract (Java): Contract-first testing for Spring-based services.

  • Postman/Newman (JS): Can validate API responses against schema definitions.

  • Hoverfly (Go): Proxy-based API simulation and verification.

  • Pacto / Dredd (Ruby / Node.js): Contract testing for API Blueprint and OpenAPI (Swagger).

  • Pactflow (SaaS): Hosted contract broker with GitOps and CI/CD integration.


🚀 How Modern Companies Use Contract Testing Effectively


1. Shift-Left Testing in CI/CD

  • Contracts are verified during development, not just staging.
     
  • CI jobs for consumer verification and provider verification.
     

2. Contract as Code


  • Contracts are versioned alongside code in Git.
     
  • Developers can test changes locally before merging.
     

3. Use of Contract Brokers

  • Tools like Pact Broker or Pactflow manage contracts between multiple teams and services.
     
  • Provides versioning, sharing, tagging (prod, test, main, etc.).
     

4. Integrating with Microservice Pipelines

  • Each microservice validates contracts independently.
     
  • Services can be deployed autonomously if they pass contract tests.
     

5. Avoiding Over-Reliance on E2E Tests

  • Companies like Netflix, Spotify, and ThoughtWorks use contract testing to minimize brittle end-to-end tests and speed up delivery.
     

📘 Example (Pact JS):

const { Pact } = require('@pact-foundation/pact');

const provider = new Pact({
 consumer: 'UserApp',
 provider: 'UserService',
});

describe('User Service Contract', () => {
 it('should return a user by ID', async () => {
   await provider.addInteraction({
     state: 'user 123 exists',
     uponReceiving: 'a request for user 123',
     withRequest: {
       method: 'GET',
       path: '/users/123',
     },
     willRespondWith: {
       status: 200,
       body: { id: 123, name: 'Alice' },
     },
   });

   // Run consumer code and verify interaction
 });
});
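
The snippet above covers the consumer side. On the provider side, a hedged sketch of verification with Pact JVM and JUnit 5 might look like this, carrying over the service name and state from the example (the port and pact folder are assumptions):

import au.com.dius.pact.provider.junit5.HttpTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.State;
import au.com.dius.pact.provider.junitsupport.loader.PactFolder;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;

@Provider("UserService")
@PactFolder("pacts") // or load contracts from a broker such as Pactflow
public class UserServiceContractTest {

    @BeforeEach
    void before(PactVerificationContext context) {
        // Point verification at the locally running provider.
        context.setTarget(new HttpTestTarget("localhost", 8080));
    }

    @State("user 123 exists")
    void user123Exists() {
        // Seed test data so GET /users/123 returns { id: 123, name: "Alice" }.
    }

    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void verifyPact(PactVerificationContext context) {
        context.verifyInteraction();
    }
}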

✅ Benefits of Contract Testing

  • Early detection of integration issues
     
  • Faster feedback loops
     
  • Independent deployment of services
     
  • Reduced reliance on slow end-to-end tests
     
  • Better collaboration between teams
     

📖 Further Reading:

  • Pact Official Site
     
  • Spring Cloud Contract
     
  • Pactflow Blog
     

Shift Left in QA: What to Know

Shift Left in QA testing refers to the practice of moving quality assurance activities earlier in the software development lifecycle (SDLC), especially during the design and development phases. The idea is to identify defects and issues early, rather than waiting until later stages (e.g., after the code is developed or in production). This approach helps in reducing the cost of fixing bugs and improves the overall quality and speed of delivery.


Key Principles of Shift Left in QA:

  1. Early Detection of Defects: QA teams start testing much earlier than traditional practices, such as during requirement gathering, design, and development phases.
     
  2. Test-Driven Development (TDD): Writing tests before writing the actual code, ensuring that every line of code is validated by automated tests (see the sketch after this list).
     
  3. Behavior-Driven Development (BDD): Writing tests that describe the expected behavior of an application in plain language (often using tools like Cucumber), making it easier to understand for both technical and non-technical stakeholders.
     
  4. Continuous Testing: Test activities are integrated into the CI/CD pipeline, meaning automated tests are continuously run on every code change.
     
  5. Collaboration: Development, QA, and operations teams collaborate closely to ensure quality is embedded throughout the process.
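
To make principle 2 concrete, here is a minimal red-green TDD example in JUnit 5. CartService is a hypothetical class invented for the illustration; in real TDD the test exists first and fails until the class is written.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class CartServiceTest {

    // Written first: states the expected behavior before any implementation exists (red).
    @Test
    void totalSumsQuantityTimesUnitPrice() {
        CartService cart = new CartService();
        cart.add("book", 2, 10.00);
        cart.add("pen", 1, 2.50);
        assertEquals(22.50, cart.total(), 0.001);
    }

    // Minimal implementation added afterwards to make the test pass (green).
    static class CartService {
        private final List<double[]> lines = new ArrayList<>();

        void add(String sku, int qty, double unitPrice) {
            lines.add(new double[] {qty, unitPrice});
        }

        double total() {
            return lines.stream().mapToDouble(l -> l[0] * l[1]).sum();
        }
    }
}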
     

🔍 How Does Shift Left Impact the QA Process?

  1. Test Early and Often
     
    • Test cases are written as early as possible—even before code is written.
       
    • Automated tests are run frequently in the CI/CD pipeline to identify defects quickly.
       

  2. Incorporation of Test Automation
     
    • Automated tests (unit, integration, UI, and performance) are integrated into CI/CD processes so that every code change is tested instantly.
       
    • Tools like Jenkins, GitLab CI, CircleCI, and Travis CI are used to continuously run automated tests.
       

  3. Collaboration Between Teams
     
    • DevOps and QA work together, ensuring smooth collaboration and faster delivery.
       
    • QA teams are involved in requirements gathering, design, and development phases.
       

  4. Early Performance and Security Testing
     
    • Performance testing (e.g., load, stress) and security testing (e.g., static analysis) start early in the SDLC, instead of waiting for later phases like staging or production.
       

  5. Shift Left Testing Tools
     
    • Tools like SonarQube (for static code analysis), Selenium, Cypress, and JUnit (for unit and integration tests) are used to integrate quality checks as part of the development process.
       
    • Mocking and Stubbing: Tools like WireMock are used for creating mock environments to test services and APIs early.
       

📈 Benefits of Shift Left in QA Testing

  1. Faster Time to Market:
     
    • Since defects are found early, developers can fix them sooner, reducing delays caused by defects identified later in the process.
       

  2. Lower Cost of Fixing Bugs:
     
    • The cost of fixing defects increases exponentially the later they are found in the process. Early testing helps identify issues when they're cheaper to fix.
       

  3. Improved Collaboration:
     
    • Development, operations, and QA teams work closely throughout the SDLC, fostering a culture of quality.
       

  4. Higher Quality Software:
     
    • Continuous testing and feedback loops help ensure that the software meets quality standards throughout the development process.
       

  5. Reduced Risks:
     
    • Earlier identification of security, performance, or functional issues reduces the risk of major issues surfacing later in production.
       

🔄 How Modern Companies Adopt Shift Left Testing

  1. DevOps and CI/CD Integration:
    Modern companies like Netflix, Amazon, and Spotify integrate testing early in their CI/CD pipeline, where every commit triggers automated tests, including unit, integration, and UI tests.
     
  2. Test Automation at the Core:
    Many companies automate everything—unit tests, functional tests, integration tests, and performance tests. These tests run continuously, catching bugs early in the process.
     
  3. Behavior-Driven Development (BDD):
    Companies like Google and Shopify use BDD to involve both developers and business stakeholders in writing tests. This helps shift testing to earlier stages, ensuring the product meets business needs from the beginning.
     
  4. Shift Left in Security (DevSecOps):
    In addition to functional testing, many modern companies practice DevSecOps, integrating security tests early in the development pipeline. Tools like OWASP ZAP, SonarQube, and Checkmarx automatically scan for vulnerabilities during the development cycle.
     
  5. Performance Testing Early:
    Companies like LinkedIn and Twitter use performance testing during the development phase to ensure their systems scale efficiently. They test for load, scalability, and stress early to ensure the application can handle traffic spikes.
     

⚙️ Tools and Practices for Shift Left Testing

  • Test Automation:
     
    • Selenium, Appium, JUnit, Cypress, TestCafe (for UI and functional tests)
       
    • Jenkins, CircleCI, GitLab CI (for integrating tests into CI/CD pipelines)
       
  • TDD and BDD:
     
    • Cucumber (for BDD; a step-definition sketch follows this list)
       
    • JUnit and Mockito (for TDD)
       
  • Static Code Analysis:
     
    • SonarQube, CodeClimate (to catch issues in code quality early)
       
  • Performance Testing:
     
    • JMeter, Gatling, k6 (integrating performance tests early)
       
  • Security Testing:
     
    • OWASP ZAP, Snyk, Checkmarx (early vulnerability scans)
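
To show how the Cucumber bullet above plays out in code, here is a hedged sketch of Java step definitions for a plain-language login scenario. The scenario wording and the stubbed login flow are invented for illustration.

import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Binds a Gherkin scenario such as:
//   Given a registered user
//   When they log in with valid credentials
//   Then they see their dashboard
public class LoginSteps {

    private boolean loggedIn;

    @Given("a registered user")
    public void aRegisteredUser() {
        // Seed or look up a test user here.
    }

    @When("they log in with valid credentials")
    public void theyLogInWithValidCredentials() {
        loggedIn = true; // Stub; a full suite would drive the real login flow.
    }

    @Then("they see their dashboard")
    public void theySeeTheirDashboard() {
        assertTrue(loggedIn);
    }
}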
       

📚 Learn More

  • Shift Left Testing Explained - Sauce Labs
     
  • How to Implement Shift Left Testing in Your DevOps Pipeline - TechBeacon
     

