Posts

Showing posts from 2024

False Negative

A false negative in software testing occurs when a test fails to detect a defect or issue in the software: the test result shows that everything is fine when there actually is a problem, so a real issue goes unnoticed.

Example: Imagine you have a test that checks whether a login feature works correctly. The test is supposed to verify that users cannot log in with invalid credentials. If the test passes and reports that the login feature is secure, but in reality users can log in with invalid credentials, this is a false negative.

Causes:
- Insufficient Test Coverage: The test might not cover all possible scenarios or edge cases, allowing some defects to slip through.
- Test Script Errors: The test script might have flaws or omissions that prevent it from detecting certain issues.
- Data Issues: Using incorrect or incomplete test data might lead to missed defects.
- Configuration Problems: Incorrect configurations in the test environment can cause some issues to remain undetected ...
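To make the login example concrete, here is a minimal TestNG sketch of how a false negative arises; the AuthService class and its bug are hypothetical, not from any real project.

    import org.testng.Assert;
    import org.testng.annotations.Test;

    // Hypothetical service under test: a buggy implementation that accepts
    // any non-empty password.
    class AuthService {
        boolean login(String user, String password) {
            return password != null && !password.isEmpty(); // bug: credentials never checked
        }
    }

    public class LoginNegativeTest {

        // False negative: the test only checks that valid credentials work,
        // so it passes and reports the feature "secure" while the real defect
        // (invalid credentials are also accepted) goes unnoticed.
        @Test
        public void validLoginWorks() {
            Assert.assertTrue(new AuthService().login("alice", "correct-password"));
        }

        // The missing case that would have caught the defect:
        // @Test
        // public void invalidLoginRejected() {
        //     Assert.assertFalse(new AuthService().login("alice", "wrong-password"));
        // }
    }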

False Positive

A false positive in software testing is when a test incorrectly indicates the presence of a defect or issue in the software when there actually is none. The test result shows a problem that doesn't exist, leading to unnecessary investigation and troubleshooting.

Example: Imagine you have a test that checks whether a login feature works correctly. The test is supposed to verify that users can log in with valid credentials. If the test fails and reports that the login feature is broken, but in reality the login feature works perfectly fine, this is a false positive.

Causes:
- Test Script Errors: The test script itself might have an error, causing it to fail even when the application works correctly.
- Environment Issues: Problems in the test environment, such as incorrect configurations or network problems, might lead to false positives.
- Timing Issues: Tests might run too quickly, before the system has fully updated or responded, leading to incorrect failure reports.

Impact: Wasted ...
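A minimal sketch of the timing cause, using TestNG and Selenium; the URL and locators are hypothetical. The assertion can fail on a slow render even though login works, which is a false positive.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class DashboardTest {

        @Test
        public void welcomeBannerShown() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.test/login");
                driver.findElement(By.id("username")).sendKeys("alice");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();

                // The banner is rendered asynchronously. Asserting immediately,
                // with no explicit wait, can fail even though the feature works:
                // a false positive caused by timing, not by a real defect.
                Assert.assertTrue(driver.findElement(By.id("welcome")).isDisplayed());
            } finally {
                driver.quit();
            }
        }
    }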

Test Metrics

Test Coverage Metrics:
- Test Case Coverage: Measures the percentage of the application's functionality covered by your test suite. A higher percentage indicates more comprehensive testing.
- Code Coverage: The percentage of code executed by your automated tests. It helps identify areas of the code that might not be receiving enough testing.
- Branch Coverage: Delves deeper, measuring the percentage of conditional branches (if/else statements) in your code exercised by tests. It helps ensure different code paths are being tested.

Test Execution Metrics:
- Number of Test Cases Executed: The simple count of test cases run during a test execution cycle.
- Test Execution Pass Rate: The percentage of test cases that passed successfully. A high pass rate signifies good test quality and potentially fewer defects.
- Test Execution Fail Rate: The opposite of the pass rate, indicating the percentage of test cases that failed ...
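The pass- and fail-rate arithmetic is simple; this small sketch shows it with made-up example figures.

    // A minimal sketch of the pass/fail-rate calculation described above.
    public class ExecutionMetrics {

        public static void main(String[] args) {
            int executed = 200;  // test cases run in the cycle (example figures)
            int passed   = 188;
            int failed   = executed - passed;

            double passRate = 100.0 * passed / executed;  // 94.0%
            double failRate = 100.0 * failed / executed;  //  6.0%

            System.out.printf("Pass rate: %.1f%%, fail rate: %.1f%%%n", passRate, failRate);
        }
    }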

100% Test Coverage

Testing Methodologies:
- Shift Left Testing: Focus on testing early in the development lifecycle, identifying and fixing bugs during development rather than in later stages. This includes unit testing, code reviews, and static analysis tools.
- Agile Testing: Integrate testing closely with development in an iterative approach. As features are developed, tests are written and executed concurrently.
- Exploratory Testing: Encourage testers to explore the application freely, looking for unexpected behavior and usability issues.
- Model-Based Testing: Create models that represent the expected behavior of the system and use them to automate test case generation.

Testing Techniques:
- Equivalence Partitioning: Divide the input space into valid and invalid partitions based on expected behavior. Create test cases for each partition.
- Boundary Value Analysis: Test cases around the edges or boundaries of valid input ranges (e.g., minimum and maximum values); a sketch of this technique follows the list.
- Error Guessing: Based on your knowledge ...
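A minimal sketch of boundary value analysis with a TestNG DataProvider; the AgeValidator class and its 18-65 valid range are hypothetical.

    import org.testng.Assert;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    // Hypothetical validator: ages 18..65 inclusive are accepted.
    class AgeValidator {
        static boolean isValid(int age) {
            return age >= 18 && age <= 65;
        }
    }

    public class AgeBoundaryTest {

        // Boundary value analysis: exercise values just below, on, and just
        // above each boundary of the valid range.
        @DataProvider(name = "boundaries")
        public Object[][] boundaries() {
            return new Object[][] {
                {17, false}, {18, true}, {19, true},   // lower boundary
                {64, true}, {65, true}, {66, false},   // upper boundary
            };
        }

        @Test(dataProvider = "boundaries")
        public void ageBoundaries(int age, boolean expected) {
            Assert.assertEquals(AgeValidator.isValid(age), expected);
        }
    }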

MDR Process

The MDR (Minimum Detectable Rate) process in testing typically refers to a statistical concept used in quality assurance and testing, especially in the context of manufacturing, healthcare, or clinical trials. It determines the smallest rate of occurrence that can be reliably detected by a test or inspection process. Here's an overview of the MDR process in testing:

Overview of the MDR Process
- Defining the Scope: Identify the test or inspection process where MDR is to be applied. Determine the specific defect or condition that needs to be detected.
- Statistical Analysis: Use statistical methods to determine the smallest rate of occurrence that can be detected given the sample size and confidence level. Calculate the MDR based on the acceptable level of risk (alpha level, typically 5%) and the desired power of the test (1 - beta, typically 80%).
- Setting the Detection Threshold: Establish the detection threshold based on the MDR calculation. Ensure that the test process is sensitive ...
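As a rough illustration of the statistical-analysis step, one deliberately simplified model (an assumption of this sketch, not the post's full method) treats inspection as n independent samples, with "detection" meaning at least one defective item is observed. The detection probability is then 1 - (1 - p)^n, and solving for p gives the smallest rate detectable at a target probability.

    // A minimal sketch under the simplifying assumptions above:
    // P(detect) = 1 - (1 - p)^n, so the minimum detectable rate for a
    // target probability is p = 1 - (1 - P_target)^(1/n).
    public class MinimumDetectableRate {

        static double mdr(int sampleSize, double targetDetectionProbability) {
            return 1.0 - Math.pow(1.0 - targetDetectionProbability, 1.0 / sampleSize);
        }

        public static void main(String[] args) {
            // e.g., 300 samples with 95% confidence of seeing the defect at least once
            System.out.printf("MDR: %.4f%n", mdr(300, 0.95)); // ~0.0099, i.e. about 1%
        }
    }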

Fail Fast

  "Fail fast" is a concept and practice commonly used in software development, testing, and other engineering disciplines. It refers to the idea of designing systems and processes to detect and handle errors as early as possible. The goal is to minimize the time and effort spent on flawed paths and to ensure that issues are identified and addressed quickly, which can save resources and prevent cascading failures. Key Points of the "Fail Fast" Principle Early Detection of Errors : Systems or processes should be designed to immediately identify when something goes wrong. This allows for quick correction and prevents the continuation of a flawed process. Immediate Feedback : By failing fast, developers and engineers get immediate feedback on what went wrong, enabling them to fix issues promptly and effectively. Improved Quality : Catching errors early can lead to higher overall quality, as problems are addressed before they propagate and cause more significant issues....

JavaFX

JavaFX Scenic View is a debugging tool specifically designed for JavaFX applications. It allows developers to inspect the visual hierarchy, properties, and styles of JavaFX nodes in real time while the application is running. This can be incredibly helpful for debugging layout and styling issues, as well as for understanding the structure of complex UIs.

- Live Inspection: Developers can hover over UI elements in their JavaFX application to see information about the node, including its type, ID, CSS styles, and properties.
- Node Tree: Scenic View provides a visual representation of the node hierarchy, allowing developers to understand the parent-child relationships between nodes.
- CSS Inspection: Developers can view and modify CSS styles applied to individual nodes, helping to diagnose styling issues and experiment with different styles.
- Layout Debugging: Scenic View includes tools for debugging layout issues, such as visualizing layout bounds, insets, and alignment constraints.
- Event Moni...
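For context, attaching Scenic View from code is typically a one-line call; this is a minimal sketch assuming the Scenic View jar (org.scenicview.ScenicView) is on the classpath. Scenic View can also be run as a standalone application.

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.control.Button;
    import javafx.scene.layout.StackPane;
    import javafx.stage.Stage;
    import org.scenicview.ScenicView;

    public class ScenicViewDemo extends Application {

        @Override
        public void start(Stage stage) {
            StackPane root = new StackPane(new Button("Inspect me"));
            Scene scene = new Scene(root, 300, 200);
            stage.setScene(scene);
            stage.show();

            // Opens the Scenic View inspector window for this scene's node tree.
            ScenicView.show(scene);
        }

        public static void main(String[] args) {
            launch(args);
        }
    }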

Interview Questions 2024 How to Explain Automation Framework Structure Q...


Test Strategy & Test Plan

A Test Strategy is a high-level document; a Test Plan is a detailed-level plan.

- Contents: A Test Strategy includes objectives, resources, tools, testing types, and environments. A Test Plan includes objectives, resources, tools, testing types, environments, test cases, test data, and entry and exit criteria.
- Target audience: A Test Strategy is for those who want an overview of the test approach, such as managers, project managers, or other stakeholders. A Test Plan's target audience is the members involved in the testing process.

Exceptions You have Handled in Your previous automation project

When the user needs to read data from a file but cannot read it due to some issue, how do you handle it?
- Catch the exception and rethrow it as a runtime exception with context (or print the stack trace):

    private String readFileAsString(String filePath) {
        try {
            return new String(Files.readAllBytes(Paths.get(filePath)));
        } catch (Exception e) {
            throw new RuntimeException("Failed to read file: " + filePath, e);
        }
    }

The user cannot find the note in the note view: either the note is not displayed fully, or the note number we are using exceeds the number of notes displayed in the note view. This gives an ArrayIndexOutOfBoundsException.
- As the solution, use a try-catch block and print the stack trace with a proper message.

We add an explicit wait in the script, but the expected condition is not fulfilled within the duration. What can we do?
- Use a try-catch block: catch the TimeoutException and print the stack trace (see the sketch after this list).

When creating a new file:
- Use try-...
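A minimal sketch of catching the TimeoutException from an explicit wait, using the Selenium 4 WebDriverWait API; the locator and helper name are hypothetical.

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.TimeoutException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WaitHelper {

        public static boolean waitForNoteView(WebDriver driver) {
            try {
                new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.visibilityOfElementLocated(By.id("noteView")));
                return true;
            } catch (TimeoutException e) {
                // Condition not met within the duration: log it and let the
                // caller decide how to proceed.
                e.printStackTrace();
                return false;
            }
        }
    }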

Design a New Automation Framework

- Handle scripts and data separately
- Coding standards
- Reports

Challenging issues you have worked on

Explain each challenging issue you have worked on.

Issue 01: We were implementing two REST APIs for loading data related to the notes' history versions.
Challenge:
- Identify how many versions a note can normally have
- Identify how many notes a single patient can have
- What are the most-used note types
- How many keywords a single note can have
- What is the length of a single keyword
- How to create that much data within a short period
Solution:
- Get the required information from the customer side
- Ask developers to prepare the SQL queries
- Send the queries to the customer and retrieve the required data
- Prepare test data based on that data

Issue 02: We had a legal requirement to show deleted notes.
Challenge:
- On which views/modules are we going to show the data
- How do we handle existing data issues, such as mandatory/non-mandatory fields
- What context menu options are we going to implement
- Impact on the customer side, since we are loading deleted notes
- How to test whether our environmen...

Areas to focus when testing an issue

What are the areas that need to be focused on when testing a story?
- Affected functional areas
- Technical impact the fix is going to make
- Dependent functional areas
- Database-related changes
- Impact on the existing data
- Different resolutions
- Different languages
- Impact on printing
- Impact on Data Mining
- Backward compatibility testing
- Performance testing
- User document updates
- Issues reported
- UI changes

Issues Encountered and How You Fixed Them

In the team CI environment, the scripts succeed one day, fail the next, and succeed again the day after. What can be the reason?
- An issue specific to the relevant machine, such as a Windows update pop-up or some other notification showing up
- Network issue
- Server down
- Browser issue
- Resource issues such as memory, CPU, or disk space
- Users already logged in to the system
- Existing live sessions

The regression suite is failing. How do we highlight it during the daily scrum?
- Mention the failures when giving updates
- Mention the investigation results
- Highlight the risk of the failures and the urgency of fixing them immediately
- Ask to flag the current working issue and take the new task into the sprint as unplanned work

Another QA member is on sick leave and tomorrow is release day. How do you handle the situation?
- Inform the scrum master of the situation
- Ask someone to help with the testing
- Create multiple environments and run automation scripts in parallel
- Inform abo...

Improvements I Have Made

To avoid repeating the same steps in each test script:
- Create a navigation page and methods for navigation. This reduces the number of lines in the script: login drops from 4 lines to a single call with parameters, and visit admission drops from 10 lines to a single call with parameters.

Separate data creation and precondition steps from test scripts:
- Use the TestNG dependsOnGroups / dependsOnMethods attributes and create separate scripts for data creation (see the sketch after this list). If the data creation fails, the dependent test script is skipped.

When adding a pull request, use a pre-defined template to give a correct idea about the fix:
- Add a "Before Fix" field
- Add an "After Fix" field
- Add an "Additional method" field

How to identify code-level issues in page object classes?
- Link SonarQube analysis
- Configure Bitbucket so that pull requests with critical or blocker Sonar issues cannot be merged

Nightly installation fails if there are sessions or the database i...
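A minimal sketch of the dependsOnMethods idea described above, assuming TestNG; the method names and bodies are illustrative.

    import org.testng.annotations.Test;

    public class AdmissionTests {

        @Test(groups = "dataCreation")
        public void createPatientData() {
            // ... create the patient and admission data; an exception here
            // marks this method as failed ...
        }

        // If createPatientData fails, TestNG skips this test instead of
        // running it against missing data and reporting a misleading failure.
        @Test(dependsOnMethods = "createPatientData")
        public void verifyAdmissionView() {
            // ... assertions against the created data ...
        }
    }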

Issues Encountered During Automation and How You Fixed Them

There are multiple test methods in the same class, and there is an order in which they need to execute; otherwise, the scripts fail.
- Method 01: Rename the test methods so that alphabetical order matches the required execution order
- Method 02: Use the priority attribute in TestNG

When loading a specific page, there is a script failure because a progress dialog appears and disappears after a few milliseconds.
- Add an implicit wait
- Add an explicit wait with ExpectedConditions.stalenessOf()

There is a node tree. After adding a new node, the WebDriver cannot identify it because a scroll bar appears.
- Use the JavascriptExecutor interface and its executeScript() method to call scrollIntoView() (see the sketch after this list)

Add exception handling in a few places to avoid NullPointerExceptions:
- When the driver tries to expand a node that cannot be expanded, add a try-catch block
- When uploading an image and the user cannot identify the file, add try-catch statements

When we want to know the exact time our sign or save actions happen...
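Minimal sketches of the two waiting/scrolling fixes above, using the Selenium 4 API; the locators are hypothetical.

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class TreeHelper {

        // Fix 1: wait until the short-lived progress dialog detaches from the DOM.
        public static void waitForProgressDialogToClose(WebDriver driver, WebElement dialog) {
            new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.stalenessOf(dialog));
        }

        // Fix 2: scroll a newly added tree node into view so the driver can
        // interact with it even after a scroll bar has appeared.
        public static void scrollToNode(WebDriver driver, By nodeLocator) {
            WebElement node = driver.findElement(nodeLocator);
            ((JavascriptExecutor) driver)
                .executeScript("arguments[0].scrollIntoView(true);", node);
        }
    }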

Automation Framework Structure

- Programming language: Java
- Testing tool: Selenium
- Test framework: TestNG
- Log details: Log4j
- Design pattern: Page Object Model
- Encapsulation: keep fields private and methods public
- Fluent builder pattern: methods return the page object so calls can be chained
- Test script repository: test scripts are organized by functional area
- Inheritance: test scripts extend a base class; pages extend a PageObject class (see the sketch after this section)
- Data-driven: use CSV files to keep the data

pom.xml configurations for the test script repository:

Test project name:

    <name>cam-taf-cos-cd-test</name>

14. Client path:

    <cos.client.path>CHANGE ME!!</cos.client.path>
    <cos.client.debug.port>950</cos.client.debug.port>

15. Test suite name:

    <test.suite.file.name>regression.xml</test.suite.file.name>

16. Dependency projects:

    <dependency>
        <groupId>se.camb.qa</groupId>
        <artifactId>camb-taf-cos-cd-pageobjects</artifactId>
    </dependency>

17. Mail r...
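To illustrate the Page Object Model entries above, here is a minimal sketch of a page class extending a PageObject base class, with private locators (encapsulation) and public fluent methods; the class and locator names are illustrative, not taken from the camb-taf framework.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    abstract class PageObject {
        protected final WebDriver driver;

        protected PageObject(WebDriver driver) {
            this.driver = driver;
        }
    }

    class LoginPage extends PageObject {
        // Encapsulation: locators stay private to the page class.
        private final By username     = By.id("username");
        private final By password     = By.id("password");
        private final By submitButton = By.id("submit");

        LoginPage(WebDriver driver) {
            super(driver);
        }

        public LoginPage enterCredentials(String user, String pass) {
            driver.findElement(username).sendKeys(user);
            driver.findElement(password).sendKeys(pass);
            return this;  // fluent: allows chaining
        }

        public HomePage submit() {
            driver.findElement(submitButton).click();
            return new HomePage(driver);  // hands off to the next page object
        }
    }

    class HomePage extends PageObject {
        HomePage(WebDriver driver) {
            super(driver);
        }
    }

A test script then reads as a single chain: new LoginPage(driver).enterCredentials("alice", "secret").submit();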

Quality Assurance Wisdom Wave

1. What are the phases in the SDLC?
- Requirement analysis
- Design
- Development
- Review
- Testing
- Integration

2. Why is the SDLC important?
- It provides a basis for project planning, scheduling, and estimating
- It provides a framework for a set of activities and deliverables
- It is a mechanism for project tracking and control
- It increases the visibility of project status to all the stakeholders
- It helps to identify project risks beforehand

3. What are the QA responsibilities in each SDLC phase?
- Requirement analysis: Understand each requirement clearly and clarify when needed. Prepare a traceability matrix for test case and requirement mapping.
- Design: Come up with the test strategy: what to test and how to test.
- Development: Set up the test environment and prepare test case documentation.
- Review: Understand what has changed, what was done by the developer, and what the impact is; check whether there is UT/IT coverage and decide what remains to automate.
- Testing: Perform functional and non-functional testi...