The terms 'purposeful' and 'theoretical' are often viewed as synonymous and used interchangeably in the literature. Morse (1991) suggests that the lack of clear guidelines on principles for the selection of a sample has resulted in much confusion in qualitative research. Researchers have been criticized for not describing their sampling strategies in detail, which makes interpretation of findings difficult and hinders replication of the study (Kitson et al., 1982). Some researchers have also voiced concern regarding 'method slurring' in qualitative research.

 

The terms 'purposeful', 'selective' and 'theoretical' sampling are used interchangeably in the literature. According to Chambers dictionary (1983), to select is to 'pick out a number by preference', and selective means 'having or exercising power of selection: able to discriminate'. Purpose is the 'power of seeking the end desired', and purposeful means 'directed towards a purpose'. Theoretical is defined as 'pertaining, according to theory: not practical: speculative'.


 

It is difficult to discuss theoretical sampling without referring to the grounded theory method, as theoretical sampling is a central tenet of the method. The writings of Morse, Sandelowski and Patton have been explored for their description of qualitative sampling and interpretation of theoretical sampling in particular.

 

Selective Sampling and Purposeful Sampling

 

Schatzman & Strauss (1973) suggest that after several observation visits to the sites, the researcher will know whom to sample for the purpose of the study. They proceed to discuss sampling of time, locations, events and people. According to Patton (1990), the 'logic and power of purposeful sampling lies in selecting information-rich cases for study in depth'. Selective sampling may therefore be seen to mean purposeful sampling. Further, Schatzman & Strauss (1973) point out that as the study progresses, new categories may be discovered, which would lead the researcher to more sampling in that particular dimension. This sampling driven by emergent categories sounds very similar to what happens in theoretical sampling. Glaser (1978) makes the distinction that selective sampling refers to the calculated decision to sample a specific locale according to a preconceived but reasonable initial set of dimensions (such as time, space, identity or power) which are worked out in advance for a study, whereas 'the analyst who uses theoretical sampling cannot know in advance precisely what to sample for and where it will lead him' (p. 37).

 

Theoretical sampling

 

Theoretical sampling seems to have originated with the discovery of grounded theory by Glaser and Strauss in 1967. At the time they were developing this method, verification was the prevailing paradigm in sociological research, and it seems the authors had to use quantitative terminology in order to make their new method more acceptable to quantitative sociologists. Grounded theory is a systematic method in which analysis takes place continuously; its central focus is the development of theory through constant comparative analysis of data gained from theoretical sampling. Glaser (1978) defined theoretical sampling as 'the process of data collection for generating theory whereby the analyst jointly collects, codes and analyses his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges' (p. 36).

 

Sampling takes place at two stages in grounded theory data collection. In the initial stage it resembles purposeful sampling, as researchers go to the groups which they believe will maximize the possibilities of obtaining data and leads on their question. However, it has been argued that in theoretical sampling the sample is 'not selected from the population based on certain variables prior to the study, rather the initial sample is determined to examine the phenomena where it is found to exist' (Chenitz & Swanson, 1986).
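Glaser's definition describes, in effect, an iterative loop in which the analysis so far drives each successive sampling decision. Purely as an illustration (not as anything drawn from the grounded theory literature), the sketch below models that loop in Python; every name in it (theoretical_sampling, collect_data, code_data, choose_next_site, is_saturated) is hypothetical, and reducing an interpretive process to a loop is a deliberate simplification.

```python
# A minimal sketch, assuming hypothetical helper functions supplied by the
# caller. It only illustrates the *shape* of theoretical sampling: collect,
# code/analyze, then let the emerging categories decide what to sample next.

def theoretical_sampling(initial_site, collect_data, code_data,
                         choose_next_site, is_saturated):
    site = initial_site      # initial stage: a site chosen purposefully
    categories = {}          # emerging categories and their properties
    while not is_saturated(categories):
        data = collect_data(site)                  # gather data in the field
        categories = code_data(data, categories)   # constant comparative analysis
        site = choose_next_site(categories)        # analysis drives the next sample
    return categories        # developed categories ground the emerging theory
```

The point of the sketch is structural: unlike selective sampling, where the dimensions to sample are fixed in advance, here each sampling decision is computed from the analysis completed so far, and sampling stops only at saturation.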

Flexibility of theoretical sampling

 

Theoretical sampling allows for flexibility during the research process (Glaser, 1978). The researcher can make shifts of plan and emphasis early in the research process so that the data gathered reflect what is occurring in the field rather than speculation about what could or should have been observed. Further sampling is done to develop the categories and their relationships and interrelationships, and the emerging categories may lead the researcher to sample in different locations. The aim is to achieve depth in the developing categories; the emerging categories may also indicate that the researcher should proceed to another location to sample there, which would increase breadth in the category.

 

Examination of the Variations in Qualitative Sampling

 

Strauss & Corbin (1990) elaborate on the process of theoretical sampling by describing open sampling, relational and variational sampling, and discriminate sampling. Open sampling is sampling those persons, places and situations that will provide the greatest opportunity to gather the most relevant data about the phenomenon under investigation. Relational and variational sampling involves moving from situation to situation, gathering data on theoretically relevant categories and choosing persons, sites or documents that maximize opportunities to elicit data regarding variations along dimensions of categories. In discriminate sampling, the researcher chooses 'the sites, persons, and documents that will maximize opportunities for verifying the story line, relationships between categories, and for filling in poorly developed categories'. Glaser, however, strongly discouraged this division.

 

All sampling in qualitative research is purposeful

 

In Patton's view (1990), all types of sampling in qualitative research are purposeful sampling. He describes 15 different strategies for purposefully selecting information-rich cases. The principle common to all these strategies is selecting information-rich cases purposefully to fit the study. Although he does not speak of theoretical sampling, some similarities may be seen in his description of confirming and disconfirming cases.

 

Purposeful (or theoretical) sampling

 

Morse (1991) suggests that four types of sampling are used in qualitative research: the purposeful sample, the nominated sample, the volunteer sample and the sample that consists of the total population. She states that 'when obtaining a purposeful (or theoretical) sample, the researcher selects a participant according to the needs of the study'. Morse thus sees purposeful and theoretical sampling as synonymous.

Sandelowski views all sampling in qualitative research as purposeful and suggests three kinds of purposeful sampling: maximum variation, phenomenal variation and theoretical variation. She suggests that maximum variation is one of the most frequently employed kinds of purposeful sampling, and that researchers wanting maximum variation in their sample must decide what kind(s) of variation they want to maximize and when to maximize each kind. Examples of variation may be race, class, gender or other personal characteristics. Phenomenal variation is variation in the target phenomenon under study, and the decision to seek phenomenal variation is often made a priori in order to have representative coverage of variables likely to be important in understanding how diverse factors configure as a whole. This sampling resembles Morse's purposeful sampling; however, it may be argued that Morse's use of the phrase 'or theoretical' in connection with purposeful sampling is ambiguous. Sandelowski (1995) describes theoretical variation as variation on a theoretical construct, which is associated with theoretical sampling, or sampling on analytic grounds characteristic of grounded theory studies. Thus theoretical sampling may be seen as a variation of purposeful sampling, but not all purposeful sampling is theoretical sampling.

 

Conclusion:

Theoretical sampling is always purposeful, and it could be said that some qualitative studies may contain both purposeful and theoretical sampling. However, other studies may contain only purposeful sampling, since purposeful sampling is not always theoretical. It may be acceptable to view theoretical sampling as a variant within purposeful sampling. Glaser (1992) stated that theoretical sampling in grounded theory is the process by which data collection is continually guided (p. 102). A more accurate term for theoretical sampling could therefore be 'analysis-driven purposeful sampling' or 'analysis-governed purposeful sampling'. According to Glaser (1978), the discovery of grounded theory implicitly assumes that the analyst will be creative (p. 20). This author argues for researchers to be more adaptable and creative in designing sampling strategies that are responsive to real-world conditions and that meet the information needs of the study.

The rejection of reliability and validity in qualitative inquiry in the 1980s has resulted in an interesting shift in the locus of 'ensuring rigor': from the investigator's actions during the course of the research to the reader or consumer of qualitative inquiry. The emphasis on strategies that are implemented during the research process has been replaced by strategies for evaluating trustworthiness and utility that are implemented once a study is completed.

 

Without rigor, research is worthless, becomes fiction, and loses its utility. Challenges to rigor in qualitative inquiry interestingly paralleled the blossoming of statistical packages and the development of computing systems in quantitative research. Rather than explicating how rigor was attained in qualitative inquiry, a number of leading qualitative researchers argued that reliability and validity were terms pertaining to the quantitative paradigm and were not pertinent to qualitative inquiry (Altheide & Johnson, 1998; Leininger, 1994).

 

In seminal work in the 1980s, Guba and Lincoln substituted reliability and validity with the parallel concept of 'trustworthiness', containing four aspects: credibility, transferability, dependability, and confirmability. Within these were specific methodological strategies for demonstrating qualitative rigor, such as the audit trail, member checks (when coding, categorizing, or confirming results with participants), peer debriefing, negative case analysis, structural corroboration, and referential material adequacy (Guba & Lincoln, 1981, 1982; Lincoln & Guba, 1985).

 

Credibility: The credibility criterion involves establishing that the results of qualitative research are credible or believable from the perspective of the participants in the research.

 

Transferability: Transferability refers to the degree to which the results of qualitative research can be generalized or transferred to other contexts or settings.

 

Dependability: Dependability involves showing that the findings are consistent and could be repeated.

 

Confirmability: Confirmability refers to the degree to which the results could be confirmed or corroborated by others.

 

Strategies to ensure rigor inherent in the research process itself were backstaged by these new criteria. This shift from constructive (during the process) to evaluative (post hoc) procedures occurred subtly and incrementally. Now, there is often no distinction between procedures that establish validity in the course of inquiry and those that credential the research outcomes after the fact. We are also concerned that by refusing to acknowledge the centrality of reliability and validity in qualitative methods, qualitative methodologists have inadvertently fostered the default notion that qualitative research must therefore be unreliable and invalid, lacking in rigor, and unscientific (Morse, 1999).

 

Reliability and Validity

The nature of knowledge within the rationalistic (or quantitative) paradigm is different from that within the naturalistic (qualitative) paradigm. Consequently, Guba and Lincoln argued, each paradigm requires paradigm-specific criteria for addressing 'rigor' (the term most often used in the rationalistic paradigm) or 'trustworthiness', their parallel term for qualitative rigor. They noted that, within the rationalistic paradigm, the criteria for reaching the goal of rigor are internal validity, external validity, reliability, and objectivity. They proposed that the corresponding criteria in the qualitative paradigm for ensuring trustworthiness are credibility, fittingness, auditability, and confirmability (Guba & Lincoln, 1981).

 

 

Traditional Criteria for Judging      Alternative Criteria for Judging
Quantitative Research                 Qualitative Research
----------------------------------    ----------------------------------
internal validity                     credibility
external validity                     transferability
reliability                           dependability
objectivity                           confirmability

They recommended that specific strategies be used to attain trustworthiness, such as negative case analysis, peer debriefing, prolonged engagement and persistent observation, audit trails, and member checks. Also important were characteristics of the investigator, who must be responsive and adaptable to changing circumstances, holistic, and must have processual immediacy, sensitivity, and the ability to clarify and summarize (Guba & Lincoln, 1981).

 

Techniques for establishing credibility

Prolonged engagement: Spending sufficient time in the field to learn or understand the culture, social setting, or phenomenon of interest.

 

Persistent observation: The purpose of persistent observation is to identify those characteristics and elements in the situation that are most relevant to the problem or issue being pursued, and to focus on them in detail. 'If prolonged engagement provides scope, persistent observation provides depth' (Lincoln & Guba, 1985, p. 304).

 

Triangulation: Triangulation involves using multiple data sources in an investigation to produce understanding.

 

Peer debriefing: Through analytical probing, a debriefer can help uncover taken-for-granted biases, perspectives and assumptions on the researcher's part.

 

Negative case analysis: This involves searching for and discussing elements of the data that do not support, or appear to contradict, patterns or explanations emerging from the data analysis.

 

Referential adequacy: Keeping and archiving a portion of the raw data so that the researcher and other critics can access it later for the purpose of testing analyses of the material.

 

Member-checking: This is when data, analytic categories, interpretations and conclusions are tested with members of those groups from whom the data were originally obtained.

 

Techniques for establishing transferability

Thick description: This refers to a detailed account of field experiences, giving readers enough context to judge whether the findings transfer to other settings.

 

Techniques for establishing dependability

Inquiry audit: It involves having a researcher not involved in the research process examine both the process and product of the research study.

 

Techniques for establishing confirmability

Confirmability audit: It involves having a researcher not involved in the research process examine both the process and product of the research study.

 

Audit trail: An audit trail is a transparent description of the research steps taken from the start of a research project to the development and reporting of findings.

 

Triangulation: A single method can never adequately shed light on a phenomenon; using multiple methods can help facilitate deeper understanding.

 

Reflexivity: This involves the researcher attending systematically to his or her own ways of knowing and their effect on the inquiry. While some may see these different ways of knowing as a reliability problem, others feel that these different ways of seeing provide a richer, more developed understanding of complex phenomena.

 

This resulted in a plethora of terms and criteria introduced for minute variations and situations in which rigor could be applied. Perhaps as a result of this lack of clarity, standards were introduced in the 1980s for the post hoc evaluation of qualitative inquiry (see Creswell, 1997).

 

Problems with post hoc evaluation (the development of standards)

 

While standards are a comprehensive approach to evaluating the research as a whole, they remain primarily reliant on procedures or checks applied by reviewers following completion of the research. But applying standards only once the project is complete is of limited value, as by then it is too late to correct problems.

 

Compounding the problem of duplicate terminology is the trend to treat standards, goals, and criteria synonymously. For example, Yin (1994) describes trustworthiness as a criterion to test the quality of research design, while Guba and Lincoln (1989) refer to it as a goal of the research. While strategies of trustworthiness may be useful in attempting to evaluate rigor, they do not in themselves ensure rigor. While standards are useful for evaluating relevance and utility, they do not in themselves ensure that the research will be relevant and useful.

 

We argue that strategies for ensuring rigor must be built into the qualitative research process per se. These strategies include investigator responsiveness, methodological coherence, theoretical sampling and sampling adequacy, an active analytic stance, and saturation.

 

Verification Strategies in Qualitative Research

 

In qualitative research, verification refers to the mechanisms used during the process of research to incrementally contribute to ensuring reliability and validity and, thus, the rigor of a study.

 

Investigator responsiveness: This is the researcher's creativity, sensitivity, flexibility and skill. The lack of responsiveness of the investigator at all stages of the research process is the greatest hidden threat to validity. Lack of responsiveness may be due to overly adhering to instructions, an inability to abstract, or working deductively from previously held assumptions.

 

Ensuring methodological coherence

Congruence between the research question and the components of the method must be ensured. As the study proceeds, the data may demand to be treated differently, so the question may have to be changed or the methods modified.

 

Sampling sufficiency/sampling adequacy: This is evidenced by saturation and replication (Morse, 1991) and means that sufficient data have been obtained to account for all aspects of the phenomenon.

Collecting and analyzing data concurrently: The iterative interaction between data and analysis is the essence of attaining reliability and validity.

 

Thinking theoretically and theory development: These require macro-micro perspectives, constant checking and rechecking, and building a solid foundation.

 

Conclusion: To validate is to investigate, to check, to question, and to theorize. There is a need to refocus responsibility for ensuring rigor on the investigator rather than on external judges of the completed product.