
Software Testing - Interview Questions and Answers

Here you will find answers to the most commonly asked software testing job interview questions.

1. Are there more defects in the design phase or in the coding phase?

The design phase is more error-prone than the execution phase. One of the most frequent defects introduced during design is that the product does not cover the complete requirements of the customer. The second is poor architecture or technical decisions, which make the next phase, execution, more prone to defects. Because the design phase drives the execution phase, it is the most critical phase to test. Design defects are best caught through thorough reviews. On average, 60% of defects occur during the design phase and 40% during the execution phase.

2. Can you explain boundary value analysis?

In equivalence partitioning we identify inputs which are treated by the system in the same way and produce the same results. For example, if test cases TC1 and TC2 both produce Result1, and TC3 and TC4 both produce Result2, then we have two redundant test cases. By applying equivalence partitioning we minimize these redundant test cases. Boundary value analysis then concentrates the remaining test cases on the edges of each equivalence class, where defects are most likely to occur.

So apply the following tests to see whether a group of test cases forms an equivalence class (a small Python sketch follows this checklist):

• All the test cases should test the same thing.
• They should produce the same results.
• If one test case catches a bug, then the other should also catch it.
• If one of them does not catch the defect, then the other should not catch it.
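
As an illustration (the validation function and the 18-to-60 age range below are hypothetical, not from the original text), here is a minimal Python sketch showing why two test cases from the same equivalence class are redundant, and how boundary values are chosen at the edges of each class:

# Hypothetical system under test: accepts ages from 18 to 60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# TC1 and TC2 fall in the same (valid) equivalence class and give the same
# result, so one of them is redundant; TC3 and TC4 are both in the invalid
# class above the range and are likewise redundant.
assert is_valid_age(25) == is_valid_age(40)   # TC1, TC2 -> same result
assert is_valid_age(70) == is_valid_age(95)   # TC3, TC4 -> same result

# Boundary value analysis picks values on and around each class edge.
boundary_values = [17, 18, 19, 59, 60, 61]
print([(value, is_valid_age(value)) for value in boundary_values])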

3. Can you explain requirement traceability and its importance?


In most organizations testing only starts after the execution/coding phase of the project. But if the organization wants to really benefit from testing, then testers should get involved right from the requirement phase.
  
If testers get involved right from the requirement phase, then a requirement traceability matrix is one of the most important reports, because it details what test coverage the test cases provide for each requirement.
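
As a simple illustration (the requirement IDs and test case names below are hypothetical), a traceability matrix can be represented as a mapping from each requirement to the test cases that cover it, so uncovered requirements stand out immediately:

# Hypothetical requirement-to-test-case traceability matrix.
traceability = {
    "REQ-001 Login with valid credentials": ["TC-01", "TC-02"],
    "REQ-002 Lock account after 3 failures": ["TC-03"],
    "REQ-003 Password reset by email": [],  # no coverage yet
}

# Report requirements that have no covering test case.
for requirement, test_cases in traceability.items():
    if not test_cases:
        print("No test coverage for:", requirement)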

4. Can you explain the different methodology for the execution and the design process stages in Six Sigma?

The main focus of Six Sigma is to reduce defects and variations in the processes. DMAIC and DMADV are the models used in most Six Sigma initiatives. 

DMADV is the model for designing processes while DMAIC is used for improving the process. 
    
The DMADV model includes the following five steps:

• Define: Determine the project goals and the requirements of customers (external and internal).
• Measure: Assess customer needs and specifications.
• Analyze: Examine process options to meet customer requirements.
• Design: Develop the process to meet the customer requirements.
• Verify: Check the design to ensure that it meets customer requirements.

The DMAIC model includes the following five steps:

• Define the projects, goals, and deliverables to customers (internal and external). Describe and quantify both the defects and the expected improvements.
• Measure the current performance of the process. Validate data to make sure it is credible and set the baselines.
• Analyze and determine the root cause(s) of the defects. Narrow the causal factors to the vital few.
• Improve the process to eliminate defects. Optimize the vital few and their interrelationships.
• Control the performance of the process. Lock down the gains.

5. Can you explain the PDCA cycle and where testing fits in?

Software testing is an important part of the software development process. In normal software development there are four important steps, also referred to, in short, as the PDCA (Plan, Do, Check, Act) cycle.

Let's review the four steps in detail.

• Plan: Define the goal and the plan for achieving that goal.
• Do/Execute: Execute the work according to the strategy decided during the plan stage.
• Check: Check/Test to ensure that we are moving according to plan and are getting the desired results.
• Act: If any issues are found during the check stage, take appropriate corrective action and revise the plan.
       
So developers and other stakeholders of the project do the "planning and building," while testers do the check part of the cycle. Therefore, software testing is done in the check part of the PDCA cycle.

6. Difference between Manual and Automation Testing

The difference between manual and automation testing is fundamental to software testing, because all testing is carried out either manually or with the help of automated tools. In a project you can do either manual or automation testing, or do both simultaneously. The full comparison is listed below, followed by a short example of an automated test.

Complete Difference between Manual and Automation Testing

1. Manual Testing is a process carried out manually by a human tester; Automation Testing is carried out with the help of automated tools.
2. In Manual Testing, all the phases of the STLC (test planning, test deployment, test execution, result analysis, bug tracking and reporting) are performed by human effort; in Automation Testing these phases are performed with open-source or commercial tools such as Selenium, JMeter, QTP, LoadRunner and WinRunner.
3. Manual Testing is the starting point of testing; without it, Automation Testing cannot begin. Automation Testing is a continuation of Manual Testing.
4. In Manual Testing, testers are free to do random (ad hoc) testing to find bugs; in Automation Testing, tests are always executed through scripts.
5. Manual Testing finds more bugs than automation through error guessing; Automation Testing covers the repetitive functionalities of the application.
6. Manual Testing takes a lot of time; Automation Testing takes less time.
7. Manual tests are run sequentially; automated tests can run on different machines at the same time.
8. Regression testing is tough in Manual Testing; it is easy in Automation Testing with the help of tools.
9. Manual Testing is not expensive; Automation Testing (tools and setup) is expensive.
10. More testers are required in Manual Testing because test cases are executed manually; fewer testers are required in Automation Testing because test cases are executed by tools.
11. Manual Testing gives lower-accuracy results; Automation Testing gives higher-accuracy results.
12. Manual Testing is considered lower quality; Automation Testing is considered higher quality.
13. Batch testing is not possible in Manual Testing; multiple types of batch testing are possible in Automation Testing.
14. Manual Testing is considered less reliable; Automation Testing is considered more reliable.
15. No programming is needed for Manual Testing; programming is a must for Automation Testing.
16. Manual Testing is done without the help of any tool; Automation Testing is always done using tools.
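
Since Selenium is mentioned in the comparison above, here is a minimal sketch (not from the original text) of what an automated UI test might look like in Python with Selenium WebDriver; the URL, element locators, credentials and expected page title are all hypothetical, and the selenium package plus a Chrome driver are assumed to be installed:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical login page check; URL and element locators are assumptions.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("secret123")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title, "Login did not reach the dashboard"
finally:
    driver.quit()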

7. Does automation replace manual testing?

Automation is the integration of testing tools into the test environment in such a manner that test execution, logging, and comparison of results are done with little human intervention. A testing tool is a software application which helps automate the testing process. But the testing tool is not the complete answer for automation. One of the biggest mistakes made in test automation is automating the wrong things during development. Many testers learn the hard way that not everything can be automated. The best candidates for automation are repetitive tasks. So many companies start with manual testing, then identify which tests are the most repetitive, and automate only those.
    
As a rule of thumb do not try to automate:

• Unstable software: If the software is still under development and undergoing many changes, automation testing will not be that effective.
• Once in a blue moon test scripts: Do not automate test scripts which will be run once in a while.
• Code and document review: Do not try to automate code and document reviews; they will just cause trouble.

All repetitive tasks which are frequently used should be automated. For instance, regression tests are prime candidates for automation because they're typically executed many times. Smoke, load, and performance tests are other examples of repetitive tasks that are suitable for automation. White box testing can also be automated using various unit testing tools. Code coverage can also be a good candidate for automation.

8. Explain Unit Testing, Integration Tests, System Testing and Acceptance Testing?

Unit testing - Testing performed on a single, stand-alone module or unit of code.
  
Integration Tests - Testing performed on groups of modules to ensure that data and control are passed properly between modules.
  
System testing - Executing a predetermined combination of tests that, when run successfully, demonstrates that the system as a whole meets its requirements.
  
Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).

9. Given the following fragment of code, how many tests are required for 100% decision coverage?

if width > length then
    biggest_dimension = width
    if height > width then
        biggest_dimension = height
    end_if
else
    biggest_dimension = length
    if height > length then
        biggest_dimension = height
    end_if
end_if
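
This fragment contains three decisions (width > length, height > width, and height > length), and each must evaluate to both true and false, so a minimum of four test cases is required for 100% decision coverage. Below is a small Python sketch (the function name and the concrete values are assumptions for illustration) with four such tests:

def biggest(width, length, height):
    # Python translation of the pseudocode fragment above.
    if width > length:
        biggest_dimension = width
        if height > width:
            biggest_dimension = height
    else:
        biggest_dimension = length
        if height > length:
            biggest_dimension = height
    return biggest_dimension

# Four test cases exercise both outcomes of all three decisions.
assert biggest(width=5, length=3, height=9) == 9   # width > length: T, height > width: T
assert biggest(width=5, length=3, height=4) == 5   # width > length: T, height > width: F
assert biggest(width=2, length=6, height=9) == 9   # width > length: F, height > length: T
assert biggest(width=2, length=6, height=4) == 6   # width > length: F, height > length: F
print("100% decision coverage achieved with 4 tests")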

10. How do you define a testing policy?

The following are the important steps used to define a testing policy in general, though they can change according to your organization. Let's discuss the steps of implementing a testing policy in an organization in detail.

• Definition: The first step any organization needs to do is define one unique definition for testing within the organization so that everyone is of the same mindset.
• How to achieve: How are we going to achieve our objective? Will there be a testing committee, will there be compulsory test plans which need to be executed, etc.?
• Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics such as defects per phase or per programmer? Finally, it's important to let everyone know how testing has added value to the project.
• Standards: Finally, what are the standards we want to achieve by testing? For instance, we can say that more than 20 defects per KLOC will be considered below standard and code review should be done for it.

11. How does load testing work for websites?

Websites have software called a web server installed on the server. The user sends a request to the web server and receives a response. So, for instance, when you type www.google.com the web server receives the request and sends you the home page as a response. This happens each time you click on a link, submit a form, etc. So if we want to do load testing, we just need to multiply these requests and responses "N" times. This is what an automation tool does: it first captures the request and response and then simply multiplies them "N" times and sends them to the web server, which results in load simulation.

So once the tool captures the request and response, we just need to replay them with virtual users. Virtual users are logical users which simulate actual physical users by sending the same requests. Doing load testing with 10,000 real physical users on an application is practically impossible, but by using a load testing tool the same load can be generated with virtual users.
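
As a rough illustration of the idea (not a real load testing tool), here is a minimal Python sketch that replays a captured request with concurrent virtual users; the URL and the user count are assumptions for the example:

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # hypothetical captured request
VIRTUAL_USERS = 50             # number of concurrent simulated users

def virtual_user(user_id):
    # Each virtual user replays the captured request and records the status.
    with urlopen(URL, timeout=10) as response:
        return user_id, response.status

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

print("completed requests:", len(results))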

12. On what basis is the acceptance plan prepared?

In any project the acceptance document is normally prepared using the following inputs. This can vary from company to company and from project to project.

• Requirement document: This document specifies what exactly is needed in the project from the customer's perspective.
• Input from customer: This can be discussions, informal talks, emails, etc.
• Project plan: The project plan prepared by the project manager also serves as good input to finalize your acceptance test.

13. Should testing be done only after the build and execution phases are complete?

In traditional testing methodology, testing is always done after the build and execution phases. But that's the wrong way of thinking, because the earlier we catch a defect, the more cost effective it is. For instance, fixing a defect in maintenance is ten times more costly than fixing it during execution. In the requirement phase we can verify whether the requirements meet the customer's needs. During design we can check whether the design document covers all the requirements; in this stage we can also generate rough functional data and review the design document from the architecture and correctness perspectives. In the build and execution phase we can execute unit test cases and generate structural and functional data. Then comes the testing phase done in the traditional way, i.e., running the system test cases to see whether the system works according to the requirements. During installation we need to see whether the system is compatible with the rest of the software and hardware environment. Finally, during the maintenance phase, when any fixes are made, we can retest the fixes and perform regression testing. Therefore, testing should occur in conjunction with every phase of software development.

14. What are different types of verifications?

Verification is a static type of software testing: the code is not executed, and the product is evaluated by reviewing it. Types of verification are:

• Walkthrough: Walkthroughs are informal reviews, initiated by the author of the software product and presented to a colleague to help locate defects or gather suggestions for improvement. They are usually unplanned. The author explains the product, the colleague raises observations, and the author notes down the relevant points and takes corrective action.
• Inspection: Inspection is a thorough, word-by-word checking of a software product with the intention of locating defects, confirming traceability to the relevant requirements, etc.

15. What are Test comparators?

Is it really a test if you put some inputs into some software, but never look to see whether the software produces the correct result? The essence of testing is to check whether the software produces the correct result, and to do that, we must compare what the software produces to what it should produce. A test comparator helps to automate aspects of that comparison.
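
As a minimal illustration of automated comparison (the file names below are hypothetical), a test comparator can be as simple as checking the actual output of the software against a stored expected ("golden") output:

# Compare actual output of the software under test with an expected baseline.
def compare_outputs(actual_path, expected_path):
    with open(actual_path) as actual, open(expected_path) as expected:
        actual_lines = actual.read().splitlines()
        expected_lines = expected.read().splitlines()
    mismatches = [
        (i + 1, a, e)
        for i, (a, e) in enumerate(zip(actual_lines, expected_lines))
        if a != e
    ]
    return mismatches

# Hypothetical usage: report any differing lines.
# for line_no, got, want in compare_outputs("actual.txt", "expected.txt"):
#     print(f"line {line_no}: got {got!r}, expected {want!r}")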

16. What are the categories of defects?

There are three main categories of defects:

• Wrong: The requirements have been implemented incorrectly. This defect is a variance from the given specification.
• Missing: There was a requirement given by the customer and it was not done. This is a variance from the specifications, an indication that a specification was not implemented, or a requirement of the customer was not noted properly.
• Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect because it's a variance from the existing requirements.

17. What are the different Methodologies in Agile Development Model?

There are currently seven different agile methodologies that I am aware of:

• Extreme Programming (XP)
• Scrum
• Lean Software Development
• Feature-Driven Development
• Agile Unified Process
• Crystal
• Dynamic Systems Development Model (DSDM)

18. What are the Experience-based testing techniques?

In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.

19. What are the Structure-based (white-box) testing techniques?

Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.
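
As an illustration of exercising a loop zero, one, two, and many times (the function below is a hypothetical example, not from the original text):

# Hypothetical function whose internal loop we want to exercise.
def total_price(item_prices):
    total = 0.0
    for price in item_prices:   # structural technique: exercise this loop
        total += price
    return total

# Structure-based test cases: loop executed zero, one, two and many times.
assert total_price([]) == 0.0                 # loop not entered
assert total_price([5.0]) == 5.0              # loop executed once
assert total_price([5.0, 2.5]) == 7.5         # loop executed twice
assert total_price([1.0] * 100) == 100.0      # loop executed many times
print("loop coverage tests passed")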

20. What is Automated Testing?

Automated testing is testing that employs software tools to execute tests without manual intervention. It can be applied to GUI, performance, API, and other kinds of testing. It is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

21. What is Boundary value testing?

Boundary value testing tests conditions on, just below, and just above the edges of input and output equivalence classes. For instance, consider a bank application where you can withdraw a maximum of Rs. 20,000 and a minimum of Rs. 100. In boundary value testing we test the exact boundaries rather than values in the middle of the range; that means we also test just above the maximum limit and just below the minimum limit.
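
A minimal sketch of the boundary values for this withdrawal example (the validation function below is a hypothetical stand-in for the real application logic):

# Hypothetical withdrawal validator: allowed range is Rs. 100 to Rs. 20,000.
def can_withdraw(amount):
    return 100 <= amount <= 20000

# Boundary value test cases: on, just below and just above each edge.
assert can_withdraw(99) is False      # just below minimum
assert can_withdraw(100) is True      # minimum boundary
assert can_withdraw(101) is True      # just above minimum
assert can_withdraw(19999) is True    # just below maximum
assert can_withdraw(20000) is True    # maximum boundary
assert can_withdraw(20001) is False   # just above maximum
print("boundary value tests passed")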

22. What is component testing?

Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.
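
A minimal sketch of a stub and a driver in Python (the component and its dependency below are hypothetical): the stub replaces a dependency that the component calls, and the driver is the test code that calls the component under test.

# Component under test: calculates an order total using a tax service.
def order_total(amount, tax_service):
    return amount + tax_service.tax_for(amount)

# Stub: stands in for the real tax service that is called BY the component.
class TaxServiceStub:
    def tax_for(self, amount):
        return 0.0   # simplified, canned behaviour

# Driver: test code that CALLS the component under test.
def run_component_test():
    result = order_total(100.0, TaxServiceStub())
    assert result == 100.0
    print("component test passed:", result)

run_component_test()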

23. What is DRE?

To measure test effectiveness, a powerful metric known as DRE (Defect Removal Efficiency) is used. From this metric we know what proportion of the total defects was found by our set of test cases. The formula for calculating DRE is:

DRE = (number of defects found during testing) / (number of defects found during testing + number of defects found by the user after release)
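
For example, with hypothetical numbers: if the test team finds 80 defects during testing and users report another 20 defects after release, then DRE = 80 / (80 + 20) = 0.8, i.e. testing removed 80% of the total defects.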

24. What is Exploratory Testing and when should it be performed?

The definition of Exploratory Testing is “simultaneous test design and execution” against an application. This means that the tester uses her domain knowledge and testing experience to predict where and under what conditions the system might behave unexpectedly. As the tester starts exploring the system, new test design ideas are thought of on the fly and executed against the software under test.

In an exploratory testing session, the tester executes a chain of actions against the system. Each action depends on the result of the previous one, so the outcome of each action can influence what the tester does next; therefore no two test sessions are identical.

This is in contrast to Scripted Testing, where tests are designed beforehand using the requirements or design documents, usually before the system is ready, and those exact same steps are executed against the system at a later time.

Exploratory Testing is usually performed as the product is evolving (agile) or as a final check before the software is released. It is a complementary activity to automated regression testing.

25. What is random/monkey testing? When is it used?

Random testing is often known as monkey testing. In this type of testing, data is generated randomly, often using a tool or automated mechanism. The system is tested with this randomly generated input and the results are analysed accordingly. Random testing is less reliable; hence it is normally used by beginners, or to see whether the system will hold up under adverse conditions.
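
A minimal sketch of random (monkey) testing in Python (the target function and input range are hypothetical): randomly generated inputs are thrown at the system and we simply check that it does not crash.

import random

# Hypothetical function under test.
def parse_quantity(text):
    return max(0, int(text))

# Monkey testing: feed randomly generated inputs and check for crashes.
random.seed(42)  # reproducible run
for _ in range(1000):
    value = str(random.randint(-1_000_000, 1_000_000))
    try:
        parse_quantity(value)
    except Exception as error:
        print("unexpected failure for input", value, "->", error)
        break
else:
    print("no crashes observed in 1000 random inputs")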

26. What is risk-based testing?

Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.

27. What is the difference between Testing Techniques and Testing Tools?

Testing technique: A process for ensuring that some aspect of the application system or unit functions properly. There may be only a few techniques, but many tools.

Testing tool: A vehicle for performing a test process. The tool is a resource to the tester, but by itself it is insufficient to conduct testing.

28. What is the difference between UAT (User Acceptance Testing) and System testing?

System Testing: System testing finds defects when the system is tested as a whole; it is also known as end-to-end testing. In this type of testing, the application is exercised from beginning to end.

UAT: User Acceptance Testing (UAT) involves running a product through a series of specific tests which determine whether the product will meet the needs of its users.

29. What is the difference between white box, black box, and gray box testing?

Black box testing is a testing strategy based solely on requirements and specifications. Black box testing requires no knowledge of internal paths, structures, or implementation of the software being tested.
  
White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.
  
There is one more type of testing called gray box testing. In this we look into the "box" being tested just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.

30. What is white box testing and list the types of white box testing?

The white box testing technique involves selecting test cases based on an analysis of the internal structure (statement coverage, branch coverage, path coverage, condition coverage, etc.) of a component or system. It is also known as code-based testing or structural testing. Different types of white box testing include the following (a short sketch of the difference follows the list):

  • Statement Coverage
  • Decision Coverage
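
As an illustration of the difference between statement and decision coverage (the function below is hypothetical, not from the original text): a single test can execute every statement yet still miss one outcome of a decision.

# Hypothetical function with one decision and no else branch.
def apply_discount(price, is_member):
    if is_member:               # decision with two outcomes: True / False
        price = price - 10      # statement only reached when is_member is True
    return price

# 100% statement coverage with a single test (the True outcome):
assert apply_discount(100, True) == 90

# 100% decision coverage also needs the False outcome:
assert apply_discount(100, False) == 100
print("statement and decision coverage illustrated")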

31. What Test Techniques are there and what is their purpose?

Test techniques are primarily used for two purposes: a) to help identify defects, and b) to reduce the number of test cases.

• Equivalence partitioning is mainly used to reduce the number of test cases by identifying different sets of data that are not the same and executing only one test from each set of data.
• Boundary Value Analysis is used to check the behaviour of the system at the boundaries of the allowed data.
• State Transition Testing is used to validate allowed and disallowed states and transitions from one state to another, driven by various input data.
• Pair-wise or All-Pairs Testing is a very powerful test technique, mainly used to reduce the number of test cases while increasing the coverage of feature combinations.

32. Which is the best testing model?

In real projects, tailored models have proven to be the best, because they combine features of the Waterfall, Iterative, Evolutionary and other models and can be fitted to real-life projects. Tailored models are the most productive and beneficial for many organizations. If it's a pure testing project, then the V model is the best.

33. Why do we use decision tables?

The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules. A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a 'cause-effect' table, because there is an associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to help derive the decision table.
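
A minimal sketch of a decision table in Python (the loan-approval rules below are hypothetical): each combination of conditions maps to an action, which makes it easy to derive one test case per rule (column) of the table.

# Hypothetical decision table: (has_valid_id, has_sufficient_income) -> action
decision_table = {
    (True,  True):  "approve loan",
    (True,  False): "refer to manager",
    (False, True):  "reject application",
    (False, False): "reject application",
}

def decide(has_valid_id, has_sufficient_income):
    return decision_table[(has_valid_id, has_sufficient_income)]

# One test case per rule (column) of the decision table.
for conditions, expected_action in decision_table.items():
    assert decide(*conditions) == expected_action
print("all decision table rules covered")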
