Constructed scales should be representative of the construct they intend to measure. Something similar to the intended scale may already exist, so including those existing scale(s) and possible dependent variables in one's survey can help establish the validity of the new scale.

• Begin by generating at least ten items to represent each of the sub-scales. Administer the survey; the larger and more representative the sample, the more credible the resulting scales will be.

• Review the
means and standard deviations for the items, dropping any items with skewed means or very low variance.

• Run an
exploratory factor analysis with oblique rotation on the items for the scales; the goal is to differentiate items by their loadings on the factors, so that the resulting sub-scales represent the construct. Request factors with eigenvalues greater than 1 (the eigenvalue for a factor is calculated by squaring its factor loadings and summing down the column). It is easier to group the items by targeted scales; the more distinct the other items are, the better the chance that one's own items will load cleanly on their intended scale.

• “Cleanly loaded items” are those items that
load at least .40 on one factor and more than .10 higher on that factor than on any other. Identify these in the factor pattern.

• “Cross-loaded items” are those that do not meet the above criterion; they are candidates to drop.

• Identify factors with only a few items that do not represent clear concepts; these are “uninterpretable scales.” Also identify any factors with only one item. These factors and their items are candidates to drop.

• Look at the candidate items and factors to be dropped. Is there anything that needs to be retained because it is critical to one's construct? For example, if a conceptually important item only cross-loads on a factor to be dropped, it is good to keep it for the next round.

• Drop the items, and run a
confirmatory factor analysis, asking the program to extract only the number of factors remaining after dropping the uninterpretable and single-item ones. Go through the process again, starting at Step 3. Various test reliability measures could also be taken here.

• Keep running through the process until one gets “clean factors” (until all factors contain only cleanly loaded items).

• Run Cronbach's Alpha in the statistical program, aiming for a reliability score (internal consistency) of at least .70, and request the Alphas obtained if each item is dropped. Any scales with insufficient Alphas should be dropped, and the process repeated from Step 3. Remember that Alphas are not proof of scale quality or content validity.
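The “cleanly loaded” criterion and the column-wise eigenvalue calculation described in the steps above can be checked mechanically. A minimal numpy sketch, using a hypothetical 6-item, 2-factor loading pattern (all numbers are illustrative, not from any real survey):

```python
import numpy as np

# Hypothetical factor-loading pattern: 6 items (rows) on 2 factors (columns).
loadings = np.array([
    [0.72, 0.10],
    [0.65, 0.05],
    [0.45, 0.38],   # cross-loaded: the gap between factors is under .10
    [0.12, 0.68],
    [0.08, 0.55],
    [0.30, 0.35],   # fails the .40 minimum-loading threshold
])

# Per the text: the eigenvalue for each factor is the sum of its squared
# loadings down the column; factors with eigenvalues > 1 are retained.
eigvals = (loadings ** 2).sum(axis=0)

def is_cleanly_loaded(row, min_loading=0.40, min_gap=0.10):
    """An item loads cleanly if its largest loading is at least `min_loading`
    and more than `min_gap` greater than its loading on any other factor."""
    ordered = np.sort(np.abs(row))[::-1]
    return ordered[0] >= min_loading and (ordered[0] - ordered[1]) > min_gap

clean = [i for i, row in enumerate(loadings) if is_cleanly_loaded(row)]
print(clean)  # [0, 1, 3, 4] — items 2 and 5 are candidates to drop
```

Items 2 and 5 illustrate the two ways an item fails the criterion: a loading gap that is too small, and a maximum loading below .40.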
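The coefficient-alpha formula given in the text can also be computed directly. A minimal numpy sketch of the standardized (correlation-based) alpha, with simulated responses standing in for real survey data (all names and data are illustrative):

```python
import numpy as np

def coefficient_alpha(items):
    """Standardized Cronbach's alpha, following the text's formula:
    alpha = k^2 * (average correlation between different items)
            / (sum of ALL entries of the correlation matrix, diagonal included),
    where k is the number of items.
    `items` is an (observations x items) array of responses."""
    r = np.corrcoef(items, rowvar=False)
    k = r.shape[0]
    # The diagonal entries are all 1, so subtracting k leaves the off-diagonal sum.
    mean_inter_item_r = (r.sum() - k) / (k * (k - 1))
    return k**2 * mean_inter_item_r / r.sum()

# Simulated survey: 100 respondents answering 4 items driven by one common factor.
rng = np.random.default_rng(0)
common = rng.normal(size=100)
items = np.column_stack([common + rng.normal(scale=0.8, size=100) for _ in range(4)])
alpha = coefficient_alpha(items)
print(round(alpha, 3))  # correlated items like these should exceed the .70 target
```

Algebraically this is the same quantity as the more familiar form k·r̄ / (1 + (k−1)·r̄), since the full correlation matrix sums to k + k(k−1)·r̄.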
[Coefficient alpha = (number of items)² × (average correlation between different items) / (sum of all correlations in the correlation matrix, including the diagonal values)]

• Run correlation or regression statistics to ensure the validity of the scale. As good practice, include the final factors and all loadings, for both one's own scale and the similar scales selected, in an appendix to the created scale.

== Multi-Item and Single-Item Scales ==