log.info("Step:Click 'Add keyword' button in Keyword editor window. [Class#Method]");
21. JavaDoc for each method and whole class
```java
/**
 * Class name and description
 *
 * @author
 * @version 1.0
 * @since 26/12/2019
 */
```

```java
/**
 * Select Record in Table using row_id
 *
 * @param row_id     - row id of the required row to select
 * @param columnName - column name
 * @throws FrameworkException
 */
```
22. Fluent builder pattern in each method: the return type is always the class itself, so calls can be chained (a minimal sketch follows)
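A minimal sketch of the pattern, assuming a hypothetical ```KeywordEditorPage``` page class and log4j logging (the class and method names are illustrative, not taken from the framework):

```java
import org.apache.log4j.Logger;

public class KeywordEditorPage {

    private static final Logger log = Logger.getLogger(KeywordEditorPage.class);

    // Every action method returns the page class itself, so calls can be chained.
    public KeywordEditorPage clickAddKeyword() {
        log.info("Step:Click 'Add keyword' button in Keyword editor window. [KeywordEditorPage#clickAddKeyword]");
        // ... click logic ...
        return this;
    }

    public KeywordEditorPage enterKeyword(String keyword) {
        log.info("Step:Enter keyword '" + keyword + "' in Keyword editor window. [KeywordEditorPage#enterKeyword]");
        // ... typing logic ...
        return this;
    }
}
```

Because every method returns ```KeywordEditorPage```, a scenario reads as a single chained statement: ```new KeywordEditorPage().clickAddKeyword().enterKeyword("price");```.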
23. Configurations in environment.xml file
* Manage project configurations under the **```environment.xml```** file (a minimal sketch of the file follows):
  - ```<embedded-mode></embedded-mode>```: if 'ON', the test project supports chromium embedded mode; if 'OFF', the test project supports only swing GUI mode or pure web.
  - ```<app-mode></app-mode>```: if 'GUI', the test project supports pure swing or chromium embedded mode; if 'WEB', the test project supports only a pure web application.
  - ```<dashboard>OFF</dashboard>```: if 'OFF', listeners are disabled during execution and execution results are not collected for the dashboard.
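A minimal sketch of what the file might look like; only the three child tags above come from these notes, and the root element name is an assumption:

```xml
<!-- environment.xml (sketch only; the <environment> root element is an assumption) -->
<environment>
    <!-- ON: chromium embedded mode supported; OFF: swing GUI mode or pure web only -->
    <embedded-mode>ON</embedded-mode>
    <!-- GUI: pure swing or chromium embedded mode; WEB: pure web application only -->
    <app-mode>GUI</app-mode>
    <!-- OFF: listeners disabled, execution results not collected for the dashboard -->
    <dashboard>OFF</dashboard>
</environment>
```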
24. The chromedriver binary is located under the resources folder (a usage sketch follows)
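A hedged sketch of wiring the bundled driver into Selenium; the path and file name below are assumptions and depend on the actual project layout:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverFactory {

    // Point Selenium at the chromedriver binary kept under the resources folder.
    // The path below is an assumption; adjust it to the project's layout.
    public static WebDriver createChromeDriver() {
        System.setProperty("webdriver.chrome.driver", "src/test/resources/chromedriver.exe");
        return new ChromeDriver();
    }
}
```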
25. The log4j.xml file is located in the resources folder
26. Data input files such as images and PDFs are located in the resources folder
A false positive in software testing is when a test incorrectly indicates the presence of a defect or issue in the software when there actually is none. The test result reports a problem that does not exist, leading to unnecessary investigation and troubleshooting.

Example: imagine a test that checks whether the login feature works correctly. The test is supposed to verify that users can log in with valid credentials. If the test fails and reports that the login feature is broken, but in reality the login feature works perfectly fine, that is a false positive.

Causes:
- Test script errors: the test script itself might have an error, causing it to fail even when the application works correctly.
- Environment issues: problems in the test environment, such as incorrect configurations or network problems, might lead to false positives.
- Timing issues: tests might run too quickly, before the system has fully updated or responded, leading to incorrect failure reports (see the explicit-wait sketch below).

Impact: wasted time and effort investigating and troubleshooting issues that do not exist.
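One common way to avoid the timing-related false positives described above is an explicit wait. A minimal sketch, assuming Selenium 4's ```WebDriverWait``` and a hypothetical ```welcome-banner``` element that appears after login:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class LoginSteps {

    private final WebDriver driver;

    public LoginSteps(WebDriver driver) {
        this.driver = driver;
    }

    // Waiting explicitly for the post-login element prevents the check from running
    // before the application has responded, a common cause of false failures.
    public boolean isLoggedIn() {
        WebElement banner = new WebDriverWait(driver, Duration.ofSeconds(10))   // timeout is an assumption
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("welcome-banner"))); // hypothetical locator
        return banner.isDisplayed();
    }
}
```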
Testing Methodologies:
- Shift Left Testing: focus on testing early in the development lifecycle, identifying and fixing bugs during development rather than in later stages. This includes unit testing, code reviews, and static analysis tools.
- Agile Testing: integrate testing closely with development in an iterative approach; as features are developed, tests are written and executed concurrently.
- Exploratory Testing: encourage testers to explore the application freely, looking for unexpected behavior and usability issues.
- Model-Based Testing: create models that represent the expected behavior of the system and use them to automate test case generation.

Testing Techniques:
- Equivalence Partitioning: divide the input space into valid and invalid partitions based on expected behavior, and create test cases for each partition.
- Boundary Value Analysis: create test cases around the edges or boundaries of valid input ranges (e.g., minimum and maximum values); see the sketch below.
- Error Guessing: based on your knowledge and experience of where defects tend to occur, guess likely error-prone areas and design test cases for them.
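A minimal sketch of boundary value analysis as a data-driven test, assuming TestNG and a hypothetical age validator whose valid range is 18 to 60:

```java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AgeFieldBoundaryTest {

    // Hypothetical validator under test: accepts ages in the valid range 18..60.
    private boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    // Boundary value analysis: exercise just below, on, and just above each boundary.
    @DataProvider(name = "ageBoundaries")
    public Object[][] ageBoundaries() {
        return new Object[][] {
            {17, false}, // just below lower boundary
            {18, true},  // lower boundary
            {19, true},  // just above lower boundary
            {59, true},  // just below upper boundary
            {60, true},  // upper boundary
            {61, false}  // just above upper boundary
        };
    }

    @Test(dataProvider = "ageBoundaries")
    public void ageValidationAtBoundaries(int age, boolean expected) {
        Assert.assertEquals(isValidAge(age), expected);
    }
}
```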