2.2. Information Required

Sources of Information

Business Analysts

Users generally do not have time to focus on communicating with developers for extended periods. They are (hopefully) too busy using the system. To let users get on with the work of using the system, user advocates are employed who can focus requests and engage with developers on the users' behalf.
The role of the Business Analyst is to gather as much of the required information from the user in as short a time as possible. Analysts can take a user's vague description and, based on experience, translate it into a tangible one. From an Analyst's perspective, you know you have done your job right when a user who was struggling to describe their idea excitedly exclaims: "Yes, that's exactly what I meant!"
The Use Cases and User Stories generated by the Business Analyst are descriptions of expected behaviour, and descriptions of expected behaviour are testable statements: the key to ongoing quality assurance.
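For example (a sketch only; the Account class and its behaviour are invented for illustration), a User Story such as "a customer can withdraw no more than their available balance" translates almost mechanically into tests:

    import unittest

    class Account:
        """Hypothetical account, invented for this illustration."""
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class WithdrawalStory(unittest.TestCase):
        # User Story: a customer can withdraw no more than
        # their available balance.
        def test_withdrawal_within_balance_succeeds(self):
            account = Account(balance=100)
            account.withdraw(40)
            self.assertEqual(account.balance, 60)

        def test_withdrawal_beyond_balance_is_rejected(self):
            account = Account(balance=100)
            with self.assertRaises(ValueError):
                account.withdraw(150)

    if __name__ == "__main__":
        unittest.main()

Each sentence of the story becomes an assertion; if a sentence cannot be turned into an assertion, it was not yet a description of expected behaviour.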

Developers

Nobody knows the system better than the people who built it. Developers know the edge cases and the expected behaviour that they put into the system. Who else can do a better job of testing the marginal areas?
The key is that developers are the ones who make the expectations a reality. Therefore they should document what they did, allowing other stakeholders to review their interpretation for accuracy, completeness, and, most importantly, sensibleness.
One of the key complaints often heard from developers is that they did not receive a complete specification. Generally, the user requesting a new feature has not given the developer sufficient information to complete the task. The lack of information comes from a simple core problem: any time anyone makes a feature request, they have an image in their head of what they want; English is an imprecise language; therefore their description is likely insufficient to specify the behaviour of a computer. Developers see this problem when they realize that a statement can be interpreted in more than one way.
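As a sketch of the problem (the requirement and functions below are invented for illustration), consider the request "round totals to the nearest cent". The statement admits at least two readings, and only writing them down exposes the gap:

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    CENT = Decimal("0.01")

    def round_half_up(amount):
        # One reading: 2.345 becomes 2.35 (what many requesters picture).
        return Decimal(amount).quantize(CENT, rounding=ROUND_HALF_UP)

    def round_half_even(amount):
        # Another reading: 2.345 becomes 2.34 (banker's rounding).
        return Decimal(amount).quantize(CENT, rounding=ROUND_HALF_EVEN)

    # Both implementations satisfy "round to the nearest cent", yet they
    # disagree on the same input; the requester must now say which
    # behaviour they actually pictured.
    assert round_half_up("2.345") == Decimal("2.35")
    assert round_half_even("2.345") == Decimal("2.34")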
While the problem is not solvable, it is reducible.
By having developers define their understanding of the expectations, we have a mechanism for communication: the developer restates the specification at a technical level. If they are unable to, there is insufficient information; if their definition does not meet the expectations of the requester, there was insufficient information. This lack of information is not a failure; it is feedback that the specifications were not precise enough. Once the feedback is generated, action can be taken.
"As a result, when teams consisting solely of programmers attack a problem, they prefer to express their solution in code, rather than in documents. They would much rather dive in and write code than produce a spec first. … My pet theory is that this problem can be fixed by teaching programmers to be less reluctant writers by sending them off to take an intensive course in writing. Another solution is to hire smart program managers who produce the written spec. In either case, you should enforce the simple rule 'no code without spec'." (Joel Spolsky)
While I agree with Joel regarding the importance of specs, I also believe that writing tests is an excellent form of specification definition that Developers need to be involved in. It is boring and tedious work, but it keeps Developers focussed on the problem and forces them to think about what they are attempting to achieve. Writing specifications is therefore not optional for Developers; it is part of the planning process. All we can hope to do is reduce the pain experienced, and a Test Driven approach reduces this pain.
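As a minimal sketch of what this looks like in practice (the slugify function and its behaviour are invented for illustration), the test is written first and serves as the specification; the implementation exists only to satisfy it:

    import re

    def test_slugify_specification():
        # Written before the implementation: these assertions ARE the spec.
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaced  out  ") == "spaced-out"

    def slugify(text):
        # Written second, only to make the specification above pass.
        words = re.findall(r"[a-z0-9]+", text.lower())
        return "-".join(words)

    if __name__ == "__main__":
        test_slugify_specification()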
Whatever approach you take, it is important to recognize that Developers are part of the Testing Team. As the first people to see new features, they are the first line of validation that the system conforms to expectations.

Users

Error reports from users are an invaluable tool in the process of test definition. Every error report that comes in from a user is a test that has been performed on the system and has failed.
While we aim to shield our users from errors, we should not lose sight of the value of their input. In his essay "The Cathedral and the Bazaar", Eric S. Raymond argues that no organization, with any amount of time or resources, can achieve the level of detailed testing that a brief release in front of actual users achieves. This insight culminated in the maxim "Release Early, Release Often".
While our objective is to achieve zero defects, we should also note that an error report from a user is a test definition we have already paid for (generally in the embarrassment of a user seeing something bad). If an embarrassing test has happened (a user finding a bug), the best thing we can do is learn from our mistake. That is where an effective error report comes in.
A good error report from a user (as guided by support) takes a form very similar to a test with the same focus (https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines):
  1. Objective
  2. Steps to Reproduce
  3. Expected Outcome
  4. Actual Outcome
This format of information collection is nearly identical to the format of information required for testing; only "Steps to Reproduce" differs, and then only in name, since for a test they are simply the steps to perform.
The Test Engineer should take advantage of this: any process should include a copy of every uniquely identified bug being sent to the Testers for inclusion in their tests. The users and production support have just saved the Test Engineer the time of identifying and documenting the nature of the bug.
To take this a step further, the point of testing is to produce bug reports. Really, another way to define tests is as predefined error reports.
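A sketch of that definition in code (the apply_discount function and the bug number are invented for illustration), with each field of the error report mapped onto part of the test:

    import unittest

    def apply_discount(price, percent):
        # Hypothetical function under test.
        return round(price * (1 - percent / 100), 2)

    class Bug1234Regression(unittest.TestCase):
        def test_discount_on_zero_priced_item(self):
            # Objective: a discount on a zero-priced item stays at 0.00.
            # Steps to Reproduce: apply a 10% discount to a price of 0.00.
            result = apply_discount(0.00, 10)
            # Expected Outcome: the discounted price is 0.00.
            # Actual Outcome: recorded only when the test fails; the
            # framework captures it in the assertion failure message.
            self.assertEqual(result, 0.00)

    if __name__ == "__main__":
        unittest.main()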

Users and production support are part of your testing team. Error reports and tests are two sides of the same coin; downright conjoined twins.