Web Survey Bibliography
Title Visual analogue scales in online surveys
Author Funke, F.
Year 2005
Access date 29.03.2005
Abstract The objective was to test whether Visual Analogue Scales (VAS) are applicable to online surveys. The study compares responses to 16 items on a four-point categorical scale, an eight-point categorical scale, and a VAS in an offline survey (n=204) and an online survey (http://www.web-experiment.nonfx.net; n=873). VAS, accepted as reliable and valid instruments, consist of horizontal lines with a verbal anchor at each end; respondents state their opinion by marking the line with a cross at some point between the extremes. In the online survey the VAS values were transmitted by JavaScript, and response times, nonresponse, and dropout were compared for all three scale types by analyzing the logfile.
To compare the frequencies in each category of the categorical scales with those on the VAS, intervals on the VAS must be grouped into categories. The obvious procedure is to subdivide the line into intervals of equal length. In both designs, this linear transformation produces considerable divergences from the category frequencies observed on the categorical scales: in particular, the extreme categories are chosen markedly more often after linear VAS transformation than on the categorical scales.
An alternative to linear transformation is the model of reduced extremes, in which the extreme categories are only 2/3 as wide as the other categories. Transformation based on this model yields a higher level of correspondence between the VAS and the categorical scales for most categories, and especially for the extreme ones.
This leads to the following findings: on categorical scales, the extreme categories are perceived as narrower than the other categories, so the true frequency of extreme responses is higher than categorical scales indicate. The model of reduced extremes thus makes it possible to quantify the avoidance of extreme responses known as the central tendency. Since the relations among the tested scales, as far as they are comparable here, hardly differ between the online and offline designs, in principle nothing stands in the way of using VAS in online surveys.
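The two transformations described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: it assumes a VAS mark normalized to [0, 1] and derives the category boundaries from the stated rule that, under the model of reduced extremes, each end category is 2/3 as wide as an inner category (so for k = 4 the boundaries fall at 0.2, 0.5, and 0.8 instead of 0.25, 0.5, and 0.75).

```python
import bisect

def category_bounds(k, reduced_extremes=False):
    """Return the k+1 boundaries partitioning a unit-length VAS line.

    With reduced_extremes=False all k categories are equally wide
    (linear transformation). With reduced_extremes=True the two end
    categories are 2/3 as wide as each inner category, per the model
    of reduced extremes described in the abstract.
    """
    if reduced_extremes:
        inner = 1.0 / (k - 2.0 / 3.0)  # solve 2*(2/3)w + (k-2)w = 1
        outer = 2.0 / 3.0 * inner
        widths = [outer] + [inner] * (k - 2) + [outer]
    else:
        widths = [1.0 / k] * k
    bounds = [0.0]
    for w in widths:
        bounds.append(bounds[-1] + w)
    return bounds

def vas_to_category(x, k, reduced_extremes=False):
    """Map a VAS mark x in [0, 1] to a 1-based category index."""
    bounds = category_bounds(k, reduced_extremes)
    x = min(max(x, 0.0), 1.0)  # clamp stray values onto the line
    # search only the interior boundaries bounds[1..k-1]
    return min(bisect.bisect_right(bounds, x, 1, k), k)
```

A mark at 0.22 on a four-point scale illustrates the effect reported in the abstract: the linear transformation assigns it to the extreme category 1 (boundary at 0.25), while the model of reduced extremes, whose first boundary sits at 0.2, assigns it to category 2, shifting responses away from the extremes.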
Access/Direct link Homepage - conference - (abstract)
Year of publication 2005
Bibliographic type Conferences, workshops, tutorials, presentations
Web survey bibliography (4086)
- Displaying Videos in Web Surveys: Implications for Complete Viewing and Survey Responses; 2017; Mendelson, J.; Lee Gibson, J.; Romano Bergstrom, J. C.
- Using experts’ consensus (the Delphi method) to evaluate weighting techniques in web surveys not...; 2017; Toepoel, V.; Emerson, H.
- Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer; 2017; Hagan, T. L.; Belcher, S. M.; Donovan, H. S.
- Answering Without Reading: IMCs and Strong Satisficing in Online Surveys; 2017; Anduiza, E.; Galais, C.
- Ideal and maximum length for a web survey; 2017; Revilla, M.; Ochoa, C.
- Social desirability bias in self-reported well-being measures: evidence from an online survey; 2017; Caputo, A.
- Web-Based Survey Methodology; 2017; Wright, K. B.
- Handbook of Research Methods in Health Social Sciences; 2017; Liamputtong, P.
- Lessons from recruitment to an internet based survey for Degenerative Cervical Myelopathy: merits of...; 2017; Davies, B.; Kotter, M. R.
- Web Survey Gamification - Increasing Data Quality in Web Surveys by Using Game Design Elements; 2017; Schacht, S.; Keusch, F.; Bergmann, N.; Morana, S.
- Effects of sampling procedure on data quality in a web survey; 2017; Rimac, I.; Ogresta, J.
- Comparability of web and telephone surveys for the measurement of subjective well-being; 2017; Sarracino, F.; Riillo, C. F. A.; Mikucka, M.
- Achieving Strong Privacy in Online Survey; 2017; Zhou, Yo.; Zhou, Yi.; Chen, S.; Wu, S. S.
- A Meta-Analysis of the Effects of Incentives on Response Rate in Online Survey Studies; 2017; Mohammad Asire, A.
- Telephone versus Online Survey Modes for Election Studies: Comparing Canadian Public Opinion and Vote...; 2017; Breton, C.; Cutler, F.; Lachance, S.; Mierke-Zatwarnicki, A.
- Examining Factors Impacting Online Survey Response Rates in Educational Research: Perceptions of Graduate...; 2017; Saleh, A.; Bista, K.
- Usability Testing for Survey Research; 2017; Geisen, E.; Romano Bergstrom, J. C.
- Paradata as an aide to questionnaire design: Improving quality and reducing burden; 2017; Timm, E.; Stewart, J.; Sidney, I.
- Fieldwork monitoring and managing with time-related paradata; 2017; Vandenplas, C.
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Interviewer Gender and Survey Responses: The Effects of Humanizing Cues Variations; 2017; Jablonski, W.; Krzewinska, A.; Grzeszkiewicz-Radulska, K.
- Millennials and emojis in Spain and Mexico.; 2017; Bosch Jover, O.; Revilla, M.
- Where, When, How and with What Do Panel Interviews Take Place and Is the Quality of Answers Affected...; 2017; Niebruegge, S.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Nonresponses as context-sensitive response behaviour of participants in online-surveys and their relevance...; 2017; Wetzlehuetter, D.
- Do distractions during web survey completion affect data quality? Findings from a laboratory experiment...; 2017; Wenz, A.
- Predicting Breakoffs in Web Surveys; 2017; Mittereder, F.; West, B. T.
- Measuring Subjective Health and Life Satisfaction with U.S. Hispanics; 2017; Lee, S.; Davis, R.
- Humanizing Cues in Internet Surveys: Investigating Respondent Cognitive Processes; 2017; Jablonski, W.; Grzeszkiewicz-Radulska, K.; Krzewinska, A.
- A Comparison of Emerging Pretesting Methods for Evaluating “Modern” Surveys; 2017; Geisen, E., Murphy, J.
- The Effect of Respondent Commitment on Response Quality in Two Online Surveys; 2017; Cibelli Hibben, K.
- Pushing to web in the ISSP; 2017; Jonsdottir, G. A.; Dofradottir, A. G.; Einarsson, H. B.
- The 2016 Canadian Census: An Innovative Wave Collection Methodology to Maximize Self-Response and Internet...; 2017; Mathieu, P.
- Push2web or less is more? Experimental evidence from a mixed-mode population survey at the community...; 2017; Neumann, R.; Haeder, M.; Brust, O.; Dittrich, E.; von Hermanni, H.
- In search of best practices; 2017; Kappelhof, J. W. S.; Steijn, S.
- Redirected Inbound Call Sampling (RICS); A New Methodology; 2017; Krotki, K.; Bobashev, G.; Levine, B.; Richards, S.
- An Empirical Process for Using Non-probability Survey for Inference; 2017; Tortora, R.; Iachan, R.
- The perils of non-probability sampling; 2017; Bethlehem, J.
- A Comparison of Two Nonprobability Samples with Probability Samples; 2017; Zack, E. S.; Kennedy, J. M.
- Rates, Delays, and Completeness of General Practitioners’ Responses to a Postal Versus Web-Based...; 2017; Sebo, P.; Maisonneuve, H.; Cerutti, B.; Pascal Fournier, J.; Haller, D. M.
- Necessary but Insufficient: Why Measurement Invariance Tests Need Online Probing as a Complementary...; 2017; Meitinger, K.
- Nonresponse in Organizational Surveying: Attitudinal Distribution Form and Conditional Response Probabilities...; 2017; Kulas, J. T.; Robinson, D. H.; Kellar, D. Z.; Smith, J. A.
- Theory and Practice in Nonprobability Surveys: Parallels between Causal Inference and Survey Inference...; 2017; Mercer, A. W.; Kreuter, F.; Keeter, S.; Stuart, E. A.
- Is There a Future for Surveys; 2017; Miller, P. V.
- Reducing speeding in web surveys by providing immediate feedback; 2017; Conrad, F.; Tourangeau, R.; Couper, M. P.; Zhang, C.
- Social Desirability and Undesirability Effects on Survey Response latencies; 2017; Andersen, H.; Mayerl, J.
- A Working Example of How to Use Artificial Intelligence To Automate and Transform Surveys Into Customer...; 2017; Neve, S.
- A Case Study on Evaluating the Relevance of Some Rules for Writing Requirements through an Online Survey...; 2017; Warnier, M.; Condamines, A.
- Estimating the Impact of Measurement Differences Introduced by Efforts to Reach a Balanced Response...; 2017; Kappelhof, J. W. S.; De Leeuw, E. D.
- Targeted letters: Effects on sample composition and item non-response; 2017; Bianchi, A.; Biffignandi, S.