Article: Development and validation of the EDUcational Course Assessment TOOLkit (EDUCATOOL) – a 12-item questionnaire for evaluation of training and learning programmes

Title: Development and validation of the EDUcational Course Assessment TOOLkit (EDUCATOOL) – a 12-item questionnaire for evaluation of training and learning programmes
Authors: Matolić, Tena; Jurakić, Danijel; Greblo Jurakić, Zrinka; Maršić, Tošo; Pedišić, Željko
Keywords: course quality; educational programmes; Kirkpatrick model; learning effectiveness; training evaluation
Issue Date: 2023
Citation: Frontiers in Education, 2023, v. 8, article no. 1314584
Abstract:
Introduction: The instruments for evaluation of educational courses are often highly complex and specifically designed for a given type of training. Therefore, the aims of this study were to develop a simple and generic EDUcational Course Assessment TOOLkit (EDUCATOOL) and determine its measurement properties.
Methods: The development of EDUCATOOL encompassed: (1) a literature review; (2) drafting the questionnaire through open discussions between three researchers; (3) a Delphi survey with five content experts; and (4) consultations with 20 end-users. A subsequent validity and reliability study involved 152 university students who participated in a short educational course. Immediately after the course and a week later, the participants completed the EDUCATOOL post-course questionnaire. Six weeks after the course and a week later, they completed the EDUCATOOL follow-up questionnaire. To establish the convergent validity of EDUCATOOL, the participants also completed the “Questionnaire for Professional Training Evaluation.”
Results: The EDUCATOOL questionnaires include 12 items grouped into the following evaluation components: (1) reaction; (2) learning; (3) behavioural intent (post-course)/behaviour (follow-up); and (4) expected outcomes (post-course)/results (follow-up). In confirmatory factor analyses, the comparative fit index (CFI = 0.99 and 1.00), root mean square error of approximation (RMSEA = 0.05 and 0.03), and standardised root mean square residual (SRMR = 0.07 and 0.03) indicated adequate goodness of fit for the proposed factor structure of the EDUCATOOL questionnaires. The intraclass correlation coefficients (ICCs) for convergent validity of the post-course and follow-up questionnaires were 0.71 (95% confidence interval [CI]: 0.61, 0.78) and 0.86 (95% CI: 0.78, 0.91), respectively. The internal consistency reliability of the evaluation components, expressed using Cronbach’s alpha, ranged from 0.83 (95% CI: 0.78, 0.87) to 0.88 (95% CI: 0.84, 0.92) for the post-course questionnaire and from 0.95 (95% CI: 0.93, 0.96) to 0.97 (95% CI: 0.95, 0.98) for the follow-up questionnaire. The test–retest reliability ICCs for the overall evaluation scores of the post-course and follow-up questionnaires were 0.87 (95% CI: 0.78, 0.92) and 0.91 (95% CI: 0.85, 0.94), respectively.
Conclusion: The EDUCATOOL questionnaires have adequate factorial validity, convergent validity, internal consistency, and test–retest reliability, and they can be used to evaluate training and learning programmes.
Persistent Identifier: http://hdl.handle.net/10722/356308
ISI Accession Number ID: WOS:001132993200001
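For readers who want to see what the internal-consistency statistic reported in the abstract involves, the sketch below computes Cronbach's alpha for one evaluation component using the standard formula alpha = k/(k-1) × (1 − sum of item variances / variance of the total score). It is a minimal illustration with made-up responses; it is not the study's analysis code, and the function name and example values are assumptions.

```python
# Minimal sketch of Cronbach's alpha for the items of one evaluation component.
# Illustrative only: the responses below are made up, not EDUCATOOL study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with one row per respondent and one column per item."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Example: six respondents answering a 3-item component on a 1-5 scale.
responses = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 4, 3],
], dtype=float)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

The ICCs reported for convergent validity and test–retest reliability would be derived from similar response matrices, but typically via a two-way ANOVA-based model rather than the simple variance ratio shown above.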

 

DC Field | Value | Language
dc.contributor.author | Matolić, Tena | -
dc.contributor.author | Jurakić, Danijel | -
dc.contributor.author | Greblo Jurakić, Zrinka | -
dc.contributor.author | Maršić, Tošo | -
dc.contributor.author | Pedišić, Željko | -
dc.date.accessioned | 2025-05-27T07:22:07Z | -
dc.date.available | 2025-05-27T07:22:07Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Frontiers in Education, 2023, v. 8, article no. 1314584 | -
dc.identifier.uri | http://hdl.handle.net/10722/356308 | -
dc.language | eng | -
dc.relation.ispartof | Frontiers in Education | -
dc.subject | course quality | -
dc.subject | educational programmes | -
dc.subject | Kirkpatrick model | -
dc.subject | learning effectiveness | -
dc.subject | training evaluation | -
dc.title | Development and validation of the EDUcational Course Assessment TOOLkit (EDUCATOOL) – a 12-item questionnaire for evaluation of training and learning programmes | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.3389/feduc.2023.1314584 | -
dc.identifier.scopus | eid_2-s2.0-85180862174 | -
dc.identifier.volume | 8 | -
dc.identifier.spage | article no. 1314584 | -
dc.identifier.epage | article no. 1314584 | -
dc.identifier.eissn | 2504-284X | -
dc.identifier.isi | WOS:001132993200001 | -

The record can be exported via the OAI-PMH interface in XML formats, or in other non-XML formats.
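As an illustration of that export route, the sketch below fetches and parses the Dublin Core (oai_dc) view of this record with a standard OAI-PMH GetRecord request. The endpoint URL and OAI identifier are assumptions for illustration only and should be checked against the repository's OAI-PMH documentation; the verb, metadataPrefix, and XML namespaces are standard parts of the OAI-PMH 2.0 protocol.

```python
# Minimal sketch of retrieving this item's Dublin Core record over OAI-PMH.
# The endpoint URL and identifier below are assumptions, not confirmed values.
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://hub.hku.hk/oai/request"    # assumed repository OAI-PMH endpoint
OAI_IDENTIFIER = "oai:hub.hku.hk:10722/356308"     # assumed identifier built from the handle

params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",   # unqualified Dublin Core, per the OAI-PMH spec
    "identifier": OAI_IDENTIFIER,
}
response = requests.get(OAI_ENDPOINT, params=params, timeout=30)
response.raise_for_status()

# Standard OAI-PMH 2.0 and Dublin Core namespaces.
ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
root = ET.fromstring(response.content)

# Print each Dublin Core element (dc.title, dc.creator, dc.subject, ...).
for element in root.findall(".//oai_dc:dc/*", ns):
    tag = element.tag.split("}")[-1]   # strip the namespace prefix
    print(f"dc.{tag}: {element.text}")
```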