Research Methodology About Questionnaire Design Psychology Essay


A structured questionnaire was designed for data gathering, based on several principles recommended by Cavana et al. (2001).

4.5.1. Principles of Wording of the Questionnaire

The nature of the variable tapped, i.e. objective facts or subjective feelings, determines the type of questions that will be asked. In this research, where objective variables such as respondents' demographics are used, a single direct question with an 'ordinal-scaled' set of categories was utilized. For example:

How long have you been using the ERP system?

About one year
2 years
3 years
More than 3 years

If the variables are subjective in nature, i.e. respondents' attitudes, perceptions and beliefs are to be measured, the questions tap the elements and dimensions of the concepts. For instance, six items were employed to measure the variable 'organizational culture'.

The language of the questionnaire was selected to match the comprehension level of the respondents. In this research, Persian was chosen as the medium language of the respondents. The final English version of the questionnaire was translated into Persian using a back-translation process. This was done to ensure that the translation was consistent and that the Persian and English versions of the questionnaire were as similar as possible. This is particularly important because some of the respondents might belong to multinational companies whose working language is English.

Form of questions refers to positively and negatively worded questions. A small number of the questions were stated in the negative form instead of being worded positively. This was done to decrease the propensity of respondents to automatically select one end of the scale and to verify the reliability of responses. For example:

ERP Project Management
(Scale: Strongly Disagree | Moderately Disagree | Slightly Disagree | Neither Agree Nor Disagree | Slightly Agree | Moderately Agree | Strongly Agree)

There was not a formal management process to monitor the ERP vendor activities.

Type of questions refers to whether a question is closed or open. In this research, all questions were organized as closed questions, apart from a single open-ended question, which is explained in Section 4.5.2. For instance, a 7-point Likert scale was utilized to measure the dependent, moderator and independent variables.

ERP System Quality
(Scale: Strongly Disagree | Moderately Disagree | Slightly Disagree | Neither Agree Nor Disagree | Slightly Agree | Moderately Agree | Strongly Agree)

The ERP system provides accurate output information.

Demographic questions are also known as classification data or personal information. Data such as age, gender, educational level, and number of years in the organization were included in the questionnaire so that the characteristics of the respondents could be described later. The policy of this research was not to ask for the respondent's name. Furthermore, respondents were given a set of alternatives to choose from when providing demographic data. For instance:

Please indicate your level of education:

Undergraduate
Graduate
Postgraduate (MS)
Postgraduate (PhD)

4.5.2. Principles of Appearance of the Questionnaire

It is very important to pay attention to how the questionnaire appears. A neat and attractive questionnaire with an appropriate introduction and a well-organized series of questions and answers makes the task easier for respondents.

A good introduction was provided to clearly reveal the identity of the researcher, to communicate the intention of the survey and to assure respondents of the confidentiality of the information they provide. Such an introduction encourages less biased responses. In addition, the introduction ends on a courteous note, thanking the respondent for taking the time to respond to the questionnaire.

The questions were organized logically within the appropriate sections. In addition, instructions were provided on how to respond to the items in each section, to help participants answer them without difficulty and with minimal time and effort. For example:

‘In this section, please indicate the extent to which you agree with the following statements by marking an “X” against the appropriate scale shown.’

Respondents can sometimes become irritated by questions of a private nature. Therefore, in this research, such questions were organized into categories using an ordinal scaling format. For example:

Please indicate your age:

Below 30
31-40
41-50
Above 50

The questionnaire concluded by sincerely thanking the respondents. Moreover, the survey ended on a polite note, reminding participants to verify that all questions had been answered. Finally, the questionnaire closed with an open question, inviting respondents to comment on subjects that might not have been adequately or completely covered.

4.6. Validity and Reliability Assessment of Questionnaire

The validity and reliability of the developed questionnaire were evaluated to ensure that the collected data would be suitable for testing the research hypotheses. These evaluations concerned the scales and scaling techniques employed to measure the variables, as well as the validity and reliability of the measures used.

4.6.1. Scales and Scaling Techniques

The final outcome of the operationalization process is a variable that can be measured. The following step is to use measurement scales that are appropriate for measuring the various variables. A measurement scale is a device or instrument by which respondents are differentiated according to how they vary from one another on the variable of interest to this research. There are four types of measurement scale: nominal, ordinal, interval, and ratio. The level of sophistication to which the scales are fine-tuned gradually increases as researchers shift from the nominal to the ratio scale. In other words, information on variables can be obtained in greater detail when researchers use a ratio or interval scale rather than the other two scales. More sophisticated data analysis can be carried out with more powerful scales, which means that more meaningful answers can be found to the research questions (Cavana et al., 2001). In this research, a Likert scale was utilized to examine how strongly respondents agree or disagree with statements on a seven-point scale with the following anchors:

Strongly Disagree (1), Moderately Disagree (2), Slightly Disagree (3), Neither Agree nor Disagree (4), Slightly Agree (5), Moderately Agree (6), Strongly Agree (7)

4.6.2. Assessment of Questionnaire Validity

Cavana et al. (2001) stated that “content validity relates to the representativeness or sampling adequacy of the questionnaire regarding the content or the theoretical construct to be measured”. Content validity of the questionnaire was examined through the following three steps, as recommended by Cavana et al. (2001). First, the origin or history of each item was reported. All questionnaire items had been used and verified by prior researchers. However, because a combination of these items was used, additional validity assessment was needed, as described in the following paragraphs.


Second, a further test of content validity was conducted by sending the questionnaire to a group of ERP experts. The ERP experts examined all the elements of the questionnaire and judged whether each item measured the theoretical construct proposed. Another name for this method is 'expert judgment validity'. From the literature review, 28 well-known ERP researchers who published frequently in prominent IS journals were chosen. These authors were from diverse countries such as the USA, UK, Australia, Canada, France, Italy, the Netherlands, China, Malaysia, Taiwan, South Korea, Egypt, Saudi Arabia, and Turkey. A set comprising the problem statement, research objectives, research questions, research framework and questionnaire was sent to these 28 ERP researchers via e-mail. Five of the ERP researchers replied, and all confirmed the research framework and questionnaire set (Professor Hooshang M. Beheshti, Faculty of Business and Economics, Radford University, USA; Professor Ike C. Ehie, Faculty of Business Administration, Kansas State University, USA; Professor Jahangir Karimi, School of Business, University of Colorado, USA; Professor John Ward, School of Management, Cranfield University, Bedford, UK; and Professor Valerie Botta-Genoulaz, Faculty of Information Technology, National Institute of Applied Sciences of Lyon, France).

Third, the English questionnaire was translated into Persian, the medium language of the respondents. A professor in IT/Management who had graduated from the USA was asked to translate the validated English version of the questionnaire into Persian. The Persian questionnaire was then given to six experts involved in ERP implementation projects in Iran. These ERP experts were drawn from leading ERP consultants, vendor representatives and ERP project managers. They were asked to review the questionnaire separately and to notify the researcher of any changes needed. Based on the suggestions of the ERP experts, 32 changes were made to the wording and format of the questionnaire. In addition, five items were removed from the questionnaire and one item was added to the demographic data. Finally, the modified Persian questionnaire was given to a different IT/Management professor, who had also graduated from the USA, and he was asked to translate it back into English. This was done to ensure that the translation process was consistent and that the Persian and English versions of the questionnaire were as similar as possible. This was very important because some of the respondents were from multinational companies whose working language is English. Table (4.10) summarizes the number of changes made to the questionnaire during the content validity assessment.

Table (4.10) Changes Made to Questionnaire in Content Validity Assessment

No.  Subject                            Initial Items  Items Dropped  Items Added  Items Edited  Final Items
1    Demographic Data                         7              -             1             1            8
2    Enterprise-Wide Communication            6              -             -             3            6
3    Business Processes Reengineering         6              1             -             2            5
4    Project Management                       7              1             -             4            6
5    Team Composition and Competence          6              1             -             2            5
6    ERP System Quality                       6              1             -             3            5
7    ERP Vendor Support                       6              -             -             4            6
8    Organizational Culture                   6              -             -             3            6
9    ERP User Satisfaction                    7              -             -             5            7
10   ERP Organizational Impact                8              1             -             5            7
     Total                                   65              5             1            32           61

4.6.3. Assessment of Questionnaire Reliability

To examine the reliability of the questionnaire, a pilot study was carried out. The finalized version of the questionnaire was distributed among 54 ERP users (operational/functional/unit managers). If a considerable number of respondents had been employed in the pilot study, very few respondents would have been left for the main data collection stage. After one month, 37 completed questionnaires were collected. The data were entered into SPSS 16.0.


Cronbach’s alpha was used to indicate the extent to which a set of questions can be considered to measure a particular variable. Cronbach’s alpha generally increases as the correlations between the questions increase, so the items of each variable must be strongly correlated for the test to have high internal consistency. As can be seen in Table (4.11), the results confirmed that all variables had high Cronbach’s alpha values (above 0.7), so the questionnaire was considered reliable, as suggested by Hair et al. (2006).
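Although the reliability analysis in this research was run in SPSS 16.0, the same Cronbach's alpha statistic can be reproduced with a short script. Below is a minimal sketch in Python, assuming the pilot responses for one construct are held in a table with one column per item; the file name and the item columns (OC1 to OC6) are hypothetical placeholders for the real pilot data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct.
    Rows are respondents, columns are the construct's Likert items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = items.dropna()                     # complete cases only
    k = items.shape[1]                         # number of items in the construct
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: columns OC1..OC6 hold the six 'Organizational Culture' items
# from the 37 returned pilot questionnaires.
pilot = pd.read_csv("pilot_responses.csv")
print(round(cronbach_alpha(pilot[[f"OC{i}" for i in range(1, 7)]]), 3))
```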

Table (4.11) Reliability Assessment of Variables

No.  Construct                          Cronbach (α)
1    Enterprise-Wide Communication          0.785
2    Business Processes Reengineering       0.842
3    Project Management                     0.913
4    Team Composition and Competence        0.764
5    ERP System Quality                     0.722
6    ERP Vendor Support                     0.851
7    Organizational Culture                 0.748
8    ERP User Satisfaction                  0.869
9    ERP Organizational Impact              0.924

Based on prior research findings and the preceding validity and reliability assessments, a comprehensive questionnaire was designed. The questionnaire set consisted of the following five parts, and can be seen in Appendix (C):

A cover letter, which introduces the researcher and research objectives.

A confirmation letter from the Faculty of Business and Accountancy, University of Malaya, Malaysia.

Demographic data of respondents.

Closed-ended questions relating to the variables' measurement items.

An open question to gather further comments and suggestions from respondents.

4.7. Distributing Questionnaire

After identification of the target population, the researcher held discussions with the ERP project managers or chief information officers (CIOs) of the ERP user companies. The identity of the researcher was disclosed and the purpose of the survey was clearly described. They were also asked to identify a liaison person. Subsequently, in several companies a meeting with the liaison person was arranged to describe the method of distributing, completing and collecting the completed questionnaires. For the remaining companies, arrangements with the liaison person were negotiated via telephone. The liaison persons were also asked to indicate the number of operational/functional/unit managers who use ERP systems in their companies. Five hundred and sixty-two such managers were identified.

After confirming the number for each company, the questionnaires were distributed. The Persian version of the questionnaire was distributed to all companies except two. The liaison persons were informed that they had to collect and send the completed questionnaires to the researcher within one month. During the data collection period, more than 50 calls were received from the liaison persons seeking clarification. On average, three rounds of follow-up were carried out by telephone and e-mail. After constant reminders, 411 completed questionnaires (a 73% response rate) were collected between January and April 2009.

4.8. Data Analysis Techniques used

4.8.1. Overview of Structural Equation Modeling

Structural Equation Modeling (SEM) is widely recognized as a powerful methodology for capturing and explicating complex multivariate relations in social science data. SEM is considered a second-generation data analysis technique. It is a hybrid technique that includes aspects of confirmatory factor analysis, path analysis and regression. Most first-generation tools, such as linear regression, can analyze only one level of relationship between independent and dependent variables at a time. However, SEM is able to answer a set of interrelated research questions in a single, systematic, and comprehensive analysis by modeling the relationships among multiple independent and dependent constructs simultaneously (Gefen et al., 2000). SEM offers several advantages over the more commonly used statistical methods of multiple regression and path analysis. SEM takes into account the error variances associated with multi-item constructs. It allows the researcher to consider many relationships within a single analysis. It provides the ability to test overall models rather than individual coefficients. It can test models with multiple dependent variables. Finally, it provides the researcher with several measures to assess model fit (Kline, 2005).

There are two primary methods of SEM analysis: covariance analysis and partial least squares. AMOS, LISREL, and EQS are representative software packages using covariance analysis, while PLS is software employing partial least squares. These two types of SEM differ in the objectives of their analyses, their statistical assumptions, and the nature of the fit statistics they produce. Table (4.12) summarizes the comparison of SEM methods and linear regression. Covariance-based SEM techniques are best suited to confirmatory research such as theory testing. The PLS approach differs from covariance-based SEM and is more suitable for predictive applications and theory building. Since PLS is considered a limited-information approach, its parameter estimates are not as efficient as the full-information estimates provided by covariance-based SEM. Unlike covariance-based SEM, PLS has no overall test of model fit (Gefen et al., 2000). For these reasons, this study employs covariance-based SEM techniques.

In addition, one covariance-based SEM package, AMOS™ 16.0, is employed in this study because of its compatibility with SPSS® software and its graphical interface. The most significant feature of AMOS™ 16.0 is that the user can build a research model and test it using AMOS Graphics, which does not require any particular programming language.

Table (4.12) Comparison between Statistical Techniques

(Source: Gefen et al., 2000)

In SEM, independent variables are called exogenous variables, while dependent variables are called endogenous variables. Observed variables are directly measured by researchers, while latent variables are not directly observed but are inferred by the relationships among measured variables in the model. SEM uses path diagrams which can represent relationships among observed and latent variables. Rectangles or squares represent observed variables, while ovals or circles represent latent variables. Residuals are always unobserved, so they are represented by ovals or circles. Bidirectional arrows represent correlations and covariances, which indicate relationships without an explicitly defined causal direction.

SEM involves the use of two types of analytical procedures run simultaneously to test and validate the model. The first type of analysis is confirmatory factor analysis (CFA). CFA attempts to determine the sets of observed variables that share common variance characteristics to define the factors (latent variables) or constructs for the model. Regression analysis is the second type of analysis, run simultaneously with CFA. Regression analysis validates the path model consisting of relationships between constructs (latent variables).

There is no single statistical test that best describes the strength of a model. Instead, researchers have developed a number of goodness-of-fit measures to assess the results from three perspectives: overall fit, comparative fit to a base model, and model parsimony. The AMOS software provides several such statistics that can be used to evaluate the hypothesized model and also propose ways in which the model might be modified given sufficient theoretical justification. Hair et al. (2006) suggested that using three to four fit indices provides adequate evidence of model fit. A researcher should report at least one incremental index and one absolute index, in addition to the Chi-square statistic (χ2) and associated degrees of freedom.

4.8.1.1. Chi-square statistic (χ2)

The model chi-square (χ2) fit statistic can be used to test the overall significance of the proposed model. The statistic is calculated from the difference between the actual sample covariance matrix (based on the data collected from the sample) and the covariance matrix predicted by the model. In AMOS, it is reported as CMIN, and smaller values are better. Small values of the chi-square statistic indicate small residuals and thus a relatively good fit. DF (df) is the number of degrees of freedom for testing the model. CMIN/DF is the minimum discrepancy divided by its degrees of freedom. As a rule of thumb, a CMIN/DF value of 3 or lower has been suggested as indicating an acceptable fit (Hair et al., 2006).

4.8.1.2. Comparative Fit Index (CFI)

The Comparative Fit Index (CFI) is an incremental fit index. CFI compares the existing model fit with a null model which assumes that the latent variables in the model are uncorrelated. CFI ranges from 0 (no fit at all) to 1 (perfect fit). A commonly recommended value is 0.90 or greater (Hair et al., 2006).

4.8.1.3. Root Mean Square Error of Approximation (RMSEA)

The Root Mean Square Error of Approximation (RMSEA) corrects for model complexity by including the degrees of freedom in the denominator. RMSEA is considered a descriptive measure of overall model fit, and lower values indicate a better fit. Values less than 0.05 indicate good fit, values as high as 0.08 represent reasonable errors of approximation in the population, values ranging from 0.08 to 0.10 indicate mediocre fit, and those greater than 0.10 indicate poor fit (Hair et al., 2006).
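The three fit statistics described in Sections 4.8.1.1 to 4.8.1.3 can also be computed directly from the chi-square values of the fitted model and of the null (independence) model. The sketch below implements the standard formulas; the chi-square values, degrees of freedom and sample size used at the bottom are hypothetical and serve only to illustrate the calculation.

```python
import math

def fit_indices(chi2, df, chi2_null, df_null, n):
    """Normed chi-square (CMIN/DF), CFI and RMSEA computed from the model chi-square,
    the null (independence) model chi-square, and the sample size n."""
    cmin_df = chi2 / df
    # CFI compares the misfit of the model with the misfit of the null model
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    cfi = 1.0 - d_model / max(d_null, d_model, 1e-12)
    # RMSEA penalizes model complexity through the degrees of freedom
    rmsea = math.sqrt(d_model / (df * (n - 1)))
    return cmin_df, cfi, rmsea

# Hypothetical chi-square values, degrees of freedom and sample size for illustration
cmin_df, cfi, rmsea = fit_indices(chi2=385.2, df=180, chi2_null=4200.0, df_null=210, n=411)
print(f"CMIN/DF = {cmin_df:.2f}, CFI = {cfi:.3f}, RMSEA = {rmsea:.3f}")
```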

4.8.2. Structural Equation Modeling Stages

Structural equation modeling was employed in this research using the following steps, as suggested by Hair et al. (2006):

4.8.2.1. Measurement Model Assessment

In this stage, each latent variable is modeled as a separate measurement model whereby the measurement model relates the observed variables to their respective latent variable. The measurement model is then validated by establishing that the observed variables are reasonable measures of each latent variable. The items are submitted to a measurement model analysis to check model fit indexes for each construct. Hair et al. (2006) suggested that a model reporting the chi-square (χ2) value and degrees of freedom, the CFI, and the RMSEA will often provide sufficient unique information to evaluate a model.

In the measurement model assessment, some of the initial model fit indices may show poor fit; in that case, further model modification is carried out based on the modification indices. Modification indices reflect both measurement error correlations and item correlations (multicollinearity). A high modification index indicates error covariance, meaning that one item may share explained variance with another item (commonality) and the two are therefore redundant. The remedial action for such error covariance is to delete the item with high error variance, as recommended by Hair et al. (2006).

Validation of the measurement model addresses both discriminant validity and convergent validity. However, further analysis is conducted to assess the psychometric properties of the scales (Schumacker & Lomax, 2004).

4.8.2.2. Discriminant Validity

Discriminant validity refers to the independence of the constructs or dimensions. Discriminant validity can be assessed using SEM methodology (Schumacker & Lomax, 2004). An important aspect of discriminant validity is the validation of the second-order construct. The target coefficient (T) can be used to test for the existence of a single second-order construct that accounts for the variation in all its dimensions. The T coefficient is calculated as follows. Suppose that Model (A) (Figure 4.2) represents four correlated first-order factors and Model (B) (Figure 4.3) hypothesizes the same four first-order factors plus a single second-order factor. The T coefficient is the ratio of the chi-square of Model (A) to the chi-square of Model (B), and indicates the percentage of variation in the four first-order factors in Model (A) that is explained by the second-order factor in Model (B). A T coefficient between 0.80 and 1.0 indicates the existence of a second-order construct, since most of the variation shared by the first-order factors is explained by the single second-order factor (Hair et al., 2006).

Figure (4.2) Four Correlated First-Order Factors (Model A)

Figure (4.3) Second-Order Factor (Model B)
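As a small worked illustration of the target coefficient described above, T is simply the ratio of the two model chi-squares; the chi-square values used in the sketch below are hypothetical.

```python
def target_coefficient(chi2_model_a: float, chi2_model_b: float) -> float:
    """T = chi-square of the correlated first-order model (A) divided by the
    chi-square of the model with a single second-order factor (B)."""
    return chi2_model_a / chi2_model_b

# Hypothetical chi-square values for Model (A) and Model (B)
t = target_coefficient(chi2_model_a=248.6, chi2_model_b=271.4)
print(f"T = {t:.2f}")  # 0.80 <= T <= 1.0 supports the second-order construct
```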

4.8.2.3. Convergent Validity

Convergent validity is defined as the extent to which the measurement items converge on a theoretical construct. Convergent validity is assessed using three measures: factor loadings, composite construct reliability, and average variance extracted (Hair et al., 2006). The factor loadings of the items in the measurement model should be greater than 0.70, and each item should load significantly (p < 0.01) on its underlying construct. Next, the composite construct reliabilities should be within the commonly accepted range, i.e. greater than 0.70. Lastly, the average variance extracted for each construct should be above the recommended level of 0.50 (Hair et al., 2006).
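Composite construct reliability and average variance extracted can be computed directly from the standardized factor loadings of a construct using the usual formulas. The sketch below shows the calculation with hypothetical loadings for a five-item construct.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of a standardized item is 1 - loading^2."""
    sum_l = sum(loadings)
    sum_err = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + sum_err)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a five-item construct
loadings = [0.78, 0.81, 0.74, 0.83, 0.72]
print(f"CR  = {composite_reliability(loadings):.3f}")       # should exceed 0.70
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # should exceed 0.50
```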

4.8.2.4. Confirmatory Factor Analysis

Confirmatory factor analysis (CFA) attempts to identify the observed variables that have similar variance and covariance characteristics, thereby defining the constructs, latent variables, and factors. CFA is conducted to test the measurement model for all latent variables with their associated observed variables. The overall effectiveness of the measurement model is examined using common model fit measures: normed χ2, the comparative fit index (CFI), and the root mean square error of approximation (RMSEA) (Hair et al., 2006). If the fit measures meet the thresholds of a reasonably fitting model, the measurement model is deemed to possess an acceptable fit.
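In this study the CFA was run in AMOS 16.0 through its graphical interface. For readers who prefer a scripted alternative, a comparable measurement model can be specified in lavaan-style syntax, for example with the Python package semopy. The sketch below is illustrative only: the construct names, item names and data file are hypothetical placeholders, not the actual research instrument.

```python
# A minimal CFA sketch with the semopy package (pip install semopy).
import pandas as pd
import semopy

# Lavaan-style measurement model: each latent construct is defined ('=~') by its observed items
measurement_desc = """
ERPSystemQuality    =~ SQ1 + SQ2 + SQ3 + SQ4 + SQ5
ERPUserSatisfaction =~ US1 + US2 + US3 + US4 + US5 + US6 + US7
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical file: one column per item

cfa_model = semopy.Model(measurement_desc)
cfa_model.fit(data)                   # maximum-likelihood-based estimation by default
print(semopy.calc_stats(cfa_model))   # reports chi-square, CFI, RMSEA, among other indices
```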

4.8.2.5. Structural Model Assessment

The last stage of the SEM process involves testing the structural model. SEM is designed to estimate the strength and direction of each hypothesized path as specified in the model. Provided the measurement model has both convergent and discriminant validity, testing the structural model provides an assessment of nomological validity (Burnette & Williams, 2005). Nomological validity is the degree to which a construct behaves as it should within a system of related constructs. In this study, the proposed structural model is examined using general model fit measures: normed χ2, the comparative fit index (CFI), and the root mean square error of approximation (RMSEA). SEM fit indices measure the extent to which the covariance matrix derived from the hypothesized model differs from the covariance matrix derived from the sample. The maximum likelihood method is employed to estimate all parameters and fit indices (Hair et al., 2006).
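Continuing the hypothetical semopy sketch above, the structural model simply adds regression paths between the latent constructs to the measurement specification; the single path shown here is illustrative, not the actual research model.

```python
# Structural model sketch: adds a regression path ('~') between the hypothetical latent
# constructs defined in the measurement model above (semopy, lavaan-style syntax).
import pandas as pd
import semopy

structural_desc = """
ERPSystemQuality    =~ SQ1 + SQ2 + SQ3 + SQ4 + SQ5
ERPUserSatisfaction =~ US1 + US2 + US3 + US4 + US5 + US6 + US7
ERPUserSatisfaction ~ ERPSystemQuality
"""

data = pd.read_csv("survey_responses.csv")   # hypothetical file: one column per item
structural_model = semopy.Model(structural_desc)
structural_model.fit(data)                   # maximum-likelihood-based estimation
print(structural_model.inspect())            # estimated path coefficients
print(semopy.calc_stats(structural_model))   # normed chi-square, CFI, RMSEA, etc.
```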

4.8.2.6. Moderating Effect

According to Baron and Kenny (1986), a moderator variable affects the direction and/or strength of the relation between the independent and dependent variables. Figure (4.4) summarizes the properties of a moderator variable. Chin et al. (1996) provided a guideline for testing moderating (interaction) effects. In summary, each variable or indicator entering the interaction is first normalized or standardized by subtracting its mean and dividing by its standard deviation. The interaction construct is then created by multiplying the values of the constituent variables or indicators.

Figure (4.4): Moderator Model

(Source: Baron & Kenny, 1986)

Furthermore, the change in R-square can be examined to determine the effect size resulting from a model's interactions (Cohen, 1988). The change in R-square is calculated by subtracting the R-square of the main-effect model from the R-square of the interaction model. Cohen (1988) suggested an effect size (f²) for interactions, where an effect size of 0.371 or above is considered large, an effect size between 0.100 and 0.371 is considered medium, and an effect size of 0.100 or below is considered small.
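The standardize-and-multiply procedure and the effect size calculation can be sketched as follows. The data are simulated and the variable names are hypothetical; the effect size uses Cohen's (1988) formula f² = (R² of the interaction model minus R² of the main-effect model) divided by (1 minus R² of the interaction model).

```python
import numpy as np

def r_squared(X, y):
    """R-square from an ordinary least squares fit (intercept added automatically)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 411
x = rng.normal(size=n)              # hypothetical independent variable (e.g. a CSF score)
m = rng.normal(size=n)              # hypothetical moderator (e.g. organizational culture)
y = 0.5 * x + 0.3 * m + 0.2 * x * m + rng.normal(size=n)  # simulated outcome

# Standardize each constituent variable, then multiply to form the interaction term
xz = (x - x.mean()) / x.std(ddof=1)
mz = (m - m.mean()) / m.std(ddof=1)
interaction = xz * mz

r2_main = r_squared(np.column_stack([xz, mz]), y)
r2_int = r_squared(np.column_stack([xz, mz, interaction]), y)

# Change in R-square and Cohen's effect size for the interaction
f2 = (r2_int - r2_main) / (1 - r2_int)
print(f"delta R^2 = {r2_int - r2_main:.3f}, f^2 = {f2:.3f}")
```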

