Iranian Journal of Comparative Education


E-Learning Standards: A Comparative Study of Continents

Article Type: Original Article

Authors

Assistant Professor, Institute for Research and Development in the Humanities (SAMT), Tehran, Iran
Abstract

Setting standards to ensure quality in online education, both internationally and locally, has long been emphasized. Accordingly, comparing the e-learning standards of different regions of the world can contribute to the standardization of e-learning at the regional and international levels. The present comparative study therefore examines the guidelines available in six geographical regions, namely Africa, America, East Asia, West Asia, Europe, and Oceania, in order to compile a comprehensive set of e-learning standards and to highlight their possible commonalities and differences. In the first phase, the selected standards manuals were content-analyzed, and 103 standards with 204 mentions were identified. These standards were then grouped into six categories: 1) course design and structure, 2) student support and administration, 3) materials and presentation, 4) learning tasks and activities, 5) assessment and feedback, and 6) media and technology use. The results of the quantitative phase indicate a varied distribution of standards across different regions of the world. The findings also point to a general consensus on the importance of the accessibility of course materials and of regular learner-instructor interaction. In contrast, considerable differences were observed among the six selected regions in student support and administration, technology use, and the presentation of materials. These findings carry practical implications for policymakers, educators, and practitioners at both the regional and global scales.


  1. Introduction

             E-learning has emerged as a revolutionary force in the field of education, transforming the way knowledge is imparted and absorbed. It leverages digital platforms to provide educational content outside the traditional classroom environment, making education more accessible and flexible (Rodrigues et al., 2019). E-learning encompasses various formats like online courses, virtual classrooms, and digital collaboration spaces, enabling learners to pursue education at their own pace and convenience (Khan, 2015). Amid the COVID-19 pandemic, e-learning proved to be a lifeline, ensuring the continuity of education while maintaining social distancing norms (Purnamasari et al., 2020). As we move forward, the importance of e-learning in shaping the future of education continues to grow. It is not just an alternative anymore but an essential component of the global educational ecosystem (Dubey et al., 2023).

            Setting standards has long been emphasized to ensure quality in online education both internationally and locally (Endean et al., 2010; Jethro et al., 2012; Shraim, 2020; Mazandarani et al., 2023). Course design standards, in particular, cover instructional and visual design, media, and assessment (Singh, 2019). These standards help developers to delineate the purpose, objectives, and strategies and select appropriate content, interaction and feedback, and assessment methods (Romiszowski, 2016). UNESCO is a noteworthy example in this regard. To ensure that e-learning courses are accessible and reusable across varied platforms, it helps its Member States take stock of lessons learned in other countries and regularly publishes work examining ICT in education policies in diverse contexts to spot best practices and emerging trends and to put forward suggestions (UNESCO, 2024).

              However, unlike technical standards that change and improve over time but not across regions (Friesen, 2005), course design standards might vary from one region to another to suit specific needs and maximize efficacy and efficiency (Ghuman et al., 2017). That is why some countries like South Africa (Council on Higher Education, 2014), Kenya (Université Virtuelle Africaine, 2014), Sweden (Åström, 2008), Mexico (Comités Interinstitucionales para la Evaluación de la Educación Superior, 2017), Canada (Barker, 2002), the U.S. (Powell & Walsh, 2019), New Zealand (Marshall, 2006), Australia (Australasian Council on Online Distance and eLearning, 2014), and Brazil (INEP, 2017) have developed their own standards. The Australian ACODE Benchmarks for Technology Enhanced Learning, for instance, is a 55-page set of benchmarks aimed at assisting institutions in delivering a quality technology-enhanced learning experience for the end users. The included benchmarks cover various areas such as institution-wide policy and governance, planning for quality improvement, information technology systems and support, staff professional development, and student training and support (Åström, 2008). In another case, the National Standards for Quality Online Courses in the U.S. offers a number of standards for creating high-quality online courses and provides a framework for educational organizations to improve the courses. It covers different aspects such as course overview and support, content, instructional design, learner assessment, accessibility and usability, technology, and course evaluation (Powell & Walsh, 2019).

          In the same vein, the Institute of Standards and Industrial Research of Iran has attempted to provide a document whereby a variety of specifications are provided to guide the design, implementation, and management of electronic courses. In particular, the document specifies the roles to be taken up by the curriculum developers, materials developers, technical and pedagogical support personnel, instructors, and learners. Moreover, it puts forward the features of optimal content delivery systems, assessment procedures, and course structure (Institute of Standards and Industrial Research of Iran, 2008).

               Different countries may have unique approaches to e-learning that have proven effective. Comparing and contrasting e-learning standards across diverse regions, thus, is likely to result in identifying and sharing best practices and, subsequently, developing a comprehensive set of standards that ensures the quality and effectiveness of e-learning globally.  Several seminal studies have explored quality assurance practices and standards in the context of e-learning. Notably, Bibi et al. (2018) conducted a comparative analysis of quality assurance practices in open distance learning (ODL) universities. Their findings underscored the need for standardized approaches to ensure educational excellence. Along the same line, Pedro and Kumar (2020) highlighted the critical role of institutional support for online teaching within quality assurance frameworks. Furthermore, Weber et al. (2010) conducted a comprehensive comparison of quality assurance systems in higher education across different countries. Likewise, Inglis (2005) explored quality improvement, assurance, and benchmarking frameworks, emphasizing the multidimensional nature of quality processes in open and distance learning.

             In the context of Asia, Mohamed (2005) focused on the Arab region, emphasizing the need for tailored quality assurance frameworks. Zuhairi et al. (2020) investigated the implementation of quality assurance systems in Asian open universities, emphasizing the challenges faced by institutions in maintaining consistent quality. Additionally, Darojat (2013) conducted a comparative study of quality assurance practices in distance teaching universities in Thailand, Malaysia, and Indonesia.

            In Iran, a number of studies have striven to arrive at such quality assurance frameworks (Ahangari et al., 2019; Barari et al., 2019; Niyazi et al., 2021; Ohani Zonouz, 2022; Pourkarimi & Alimardani, 2021; Taghaddomi & Mazandarani, 2023). Niyazi et al. (2021), for instance, employed the grounded theory approach to investigate the factors influencing the quality of Farhangian University online courses in the province of Khuzestan. In addition, Taghaddomi and Mazandarani (2023) conducted semi-structured interviews with educational experts in developing university textbooks to come up with the best practices in designing and developing asynchronous educational content. The studies mentioned above represent only a small fraction of those conducted on quality assurance in the context of online learning. A closer look at these studies, however, reveals that they were restricted to regional investigations and made no attempt to arrive at a global understanding of what criteria could enlighten the path for optimal online course development. Furthermore, the literature seems to fall short on quantitative data regarding the present matter of concern. Specifically, our study provides a comprehensive qualitative and quantitative continent-based comparison, identifies key standard categories, and reveals both commonalities and disparities. These findings have practical implications for policymakers, educators, and practitioners, offering guidance for context-specific guidelines and fostering cross-cultural collaboration. This, in turn, can offer invaluable information to local experts responsible for updating local standards, provide them with a broader perspective, inform their decision-making process, and help them develop standards that are in line with global best practices.

             In particular, the present study was an attempt to answer the following questions: 1) What comprehensive set of e-learning standards can be derived from comparing and contrasting e-learning standards from different parts of the world? 2) What are the commonalities and disparities among e-learning standards manuals from different parts of the world (Zawacki-Richter & Qayyum, 2019)?

 

  2. Research Method

 

        The present study adopted a sequential explanatory mixed-method design to analyze the existing standards manuals across different continents in order to develop a comprehensive set of e-learning standards and highlight the possible commonalities and disparities between and among the standards manuals. Since we attempted to compare standards at the highest level and pave the way for a comprehensive model, the comparison level was set at the continental scale (Manzon, 2014).

 

 

Corpus Identification and Selection

             A rigorous account of how we compiled the dataset is delineated here to guarantee the transparency of the corpus selection process. To carry out a comprehensive inquiry, we classified the search terms into three categories, yielding 84 (7 × 4 × 3) combinations. Category #1 included seven items, namely standard, quality, framework, guideline, best practice, benchmark, and criteria; category #2 comprised four items, namely online, distant, electronic, and virtual; and category #3 was made up of three items, namely learning, teaching, and education. The search terms were then employed to pinpoint any relevant study published in peer-reviewed journals from 2000 to 2024. To this end, the most comprehensive international online databases, namely Web of Science, SCOPUS, and Google Scholar, were included in the search process. Having accumulated the primary set of articles, we analyzed the articles' reference sections to track any possible standards manuals, which ultimately led to the identification of 36 standards manuals from different parts of the world, some from the continents and others from particular countries.
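The combinatorial construction of the search terms can be sketched as follows. This is a minimal illustration: the three term lists are taken from the paragraph above, but the crossing of the lists into full query strings is our own assumption about how the 84 alternatives were formed.

```python
from itertools import product

# Category #1: quality-related terms (seven items)
quality_terms = ["standard", "quality", "framework", "guideline",
                 "best practice", "benchmark", "criteria"]
# Category #2: delivery-mode terms (four items)
mode_terms = ["online", "distant", "electronic", "virtual"]
# Category #3: domain terms (three items)
domain_terms = ["learning", "teaching", "education"]

# One search string per three-way combination, e.g. "quality online learning"
search_terms = [" ".join(combo)
                for combo in product(quality_terms, mode_terms, domain_terms)]

print(len(search_terms))  # 7 * 4 * 3 = 84 alternatives
```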

          To ensure manageability and facilitate comparison, we decided to choose one prototypical manual to represent each continent. Where more than one manual was available for a continent, the criteria for choosing one over the others were geographical coverage, recency, number of citations, and predominance of the pedagogical aspect. We also decided to include two standards manuals for Asia, one from the East (Indonesia) and the other from the West (Iran), because these regions have proved to be noticeably different from each other in terms of educational systems (?). In Table 1, the specifications of the six e-learning standards manuals are provided.

 

Table 1

Specifications of the E-Learning Standards Manuals

No. | Title | Organization/Author | Year | Continent (Country)
1 | AVU Quality Assurance (QA) Framework for Open, Distance and eLearning Programmes | Université Virtuelle Africaine | 2014 | Africa (Kenya)
2 | National Standards for Quality Online Courses (third edition) | Powell, A., & Walsh, S. | 2019 | America (the US)
3 | Quality Assurance Framework | Asian Association of Open Universities | 2010 | East Asia (Indonesia)
4 | E-Learning Specifications | Institute of Standards and Industrial Research of Iran | 2008 | West Asia (Iran)
5 | Quality Assessment for e-Learning: A Benchmarking Approach (third edition) | European Association of Distance Teaching Universities | 2016 | Europe (Netherlands)
6 | E-Learning Maturity Model Version Two: New Zealand Tertiary Institution E-Learning Capability: Informing and Guiding E-Learning Architectural Change and Development | Marshall, S. | 2006 | Oceania (New Zealand)

 

Data Collection and Analysis

            First, the selected standards manuals were carefully reviewed, and the main categories were determined using the content analysis method. The data analysis encompassed the tabulation of 204 mentions of 103 standard items gathered from six distinct regions (Africa, America, East Asia, West Asia, Europe, and Oceania) across six categories of e-learning standards. In evaluating regional disparities, a one-sample chi-square test was employed to compare the observed and expected frequencies of the general categories, while detailed standards were examined within each category using the Fisher-Freeman-Halton exact test. The bivariate chi-square test investigated associations between categories and regions, incorporating a Monte Carlo method for increased precision. Effect sizes were computed using Cramer's V for detailed standards and the Phi test for the general categories to gauge the strength of associations. Following the guidelines provided by Lee (2016), effect sizes were interpreted as negligible (0.00–0.10), weak (0.10–0.20), moderate (0.20–0.40), relatively strong (0.40–0.60), strong (0.60–0.80), or very strong (0.80–1.00). This comprehensive analytical approach facilitated a nuanced exploration of the distribution and relationships within the e-learning standards dataset across diverse regions.
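The effect-size computation and Lee's (2016) interpretation bands described above can be sketched as follows. This is a generic NumPy implementation, not the authors' analysis script; the function names are ours.

```python
import numpy as np

def cramers_v(table):
    """Cramer's V effect size for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Expected cell counts under independence: (row sum * column sum) / n
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

def interpret(v):
    """Map an effect size onto Lee's (2016) verbal labels."""
    for upper, label in [(0.10, "negligible"), (0.20, "weak"),
                         (0.40, "moderate"), (0.60, "relatively strong"),
                         (0.80, "strong")]:
        if v < upper:
            return label
    return "very strong"

print(interpret(0.55))  # relatively strong
```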

 

  3. Findings

 

            The purpose of this study was to compare e-learning standards among six regions: Africa, America, East Asia, West Asia, Europe, and Oceania. We found 204 standard items in the six selected standards manuals. Using content analysis, we categorized the standard items into six standard categories: (1) Course Design and Structure refers to the way the course content is organized, sequenced, and delivered to achieve the desired learning outcomes. (2) Student Support and Administration covers various aspects of academic and non-academic assistance, such as information, orientation, and guidance. (3) Materials and Presentation refers to the quality, diversity, and accessibility of the course content, as well as the design and delivery of the course materials. (4) Learning Tasks and Activities refers to the design and implementation of various learning experiences that engage learners in active and meaningful learning. It covers aspects such as learner autonomy, engagement, interaction, and self-directed learning. (5) Assessment and Feedback covers the ways of measuring and providing feedback on the learners’ performance and progress, such as tests, surveys, rubrics, and analytics. (6) Media and Technology Use covers the use and integration of various media and technology tools that enhance the eLearning experience, such as multimedia and interactivity. The data were analyzed using chi-square tests and effect size measures.

 

Table 2

Frequency of Standards in each Category and Region

Category | Africa | America | East Asia | West Asia | Europe | Oceania | Total
Course Design and Structure | 11 | 7 | 3 | 7 | 10 | 3 | 41
Student Support and Administration | 9 | 7 | 6 | 2 | 14 | 6 | 44
Materials and Presentation | 3 | 6 | 2 | 10 | 7 | 2 | 30
Learning Tasks and Activities | 9 | 10 | 4 | 9 | 10 | 4 | 46
Assessment and Feedback | 4 | 5 | 7 | 4 | 9 | 2 | 31
Media and Technology Use | 0 | 2 | 1 | 6 | 3 | 0 | 12
Total | 36 | 37 | 23 | 38 | 53 | 17 | 204

 

             The first step of the quantitative data analysis was to examine whether there were any differences in the frequency of mentioning each of the general categories of the standards (e.g., Course Design and Structure) across the six regions. A one-sample chi-square test was performed to test whether the distribution of standards in different regions differed from the expected frequency. The results showed that the distribution was significantly different from the expected frequency, X2(5, N = 204) = 23.53, p < .001, phi = 0.34, indicating that some regions (e.g., Europe) had more references to the pedagogical standards than others (e.g., East Asia and Oceania). A chi-square test of independence was then performed for each category using the Monte Carlo method. The results showed that none of the categories had a significant association with the regions (p > .05), indicating that the overall distribution of the standards was similar across the regions. The effect size, measured by Cramer's V, was 0.17, indicating small associations between the categories and the regions. The frequency of each category by region is presented in Table 2 and Figure 1.
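The reported one-sample chi-square can be reproduced from the regional column totals of Table 2. The following SciPy sketch is our verification, not the authors' original analysis script:

```python
from math import sqrt
from scipy.stats import chisquare

# Regional totals from Table 2: Africa, America, East Asia,
# West Asia, Europe, Oceania (N = 204)
observed = [36, 37, 23, 38, 53, 17]
stat, p = chisquare(observed)  # expected frequency: 204 / 6 = 34 per region

phi = sqrt(stat / sum(observed))
print(round(stat, 2), p < .001, round(phi, 2))  # 23.53, True, 0.34
```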

 

Figure 1

Bar Chart Showing the Frequency of Standard Categories across Regions

 

Table 3

Comparing Components of Course Design and Structure Standards across the Six Regions

Code | Course Design and Structure Standards | Total (regions mentioning)
1-1 | Offering Course Overview and Syllabus | 3
1-2 | Setting Clear Course Objectives and Competencies | 5
1-3 | Providing Layout and Presentation Guidelines | 1
1-4 | Scheduling Course Presentations Flexibly | 2
1-5 | Giving Instructor Control over Content Release | 1
1-6 | Providing Multiple and Flexible Learning Paths | 4
1-7 | Targeting Transferable Educational Skills | 1
1-8 | Offering Modular Structure and Courses | 1
1-9 | Aligning Course Objectives, Delivery, and Assessment | 5
1-10 | Aligning E-learning Media and Course Objectives | 3
1-11 | Limiting Session Duration | 1
1-12 | Segmenting Units | 1
1-13 | Using Assessment Outcomes to Inform Curriculum Improvements | 1
1-14 | Offering Student Research Opportunities and Skills | 1
1-15 | Sequencing Units and Lessons Logically | 4
1-16 | Providing Flexible Entry and Exit Points | 1
1-17 | Catering for the Reading Level of the Intended Learners | 2
1-18 | Designing Courses to Support Disabled Students and Specific Learning Difficulties | 3
1-19 | Paying Attention to Gender Equity, Multiculturalism, Social Justice, Ethical Values, and Environmental Sustainability | 1
Total by region | Africa 11, America 7, East Asia 3, West Asia 7, Europe 10, Oceania 3 | 41

 

          Table 3 shows the presence or absence of 19 standards related to course design and structure in the regions. For example, the table reveals setting clear course objectives and competencies (1-2) and aligning course objectives, delivery, and assessment (1-9) are common standards across regions. On the other hand, limiting session duration (1-11) and segmenting units (1-12) are unique standards for West Asia. The table also shows that Africa has the most standards (11 out of 19), followed by Europe (10 out of 19), while East Asia and Oceania have the fewest standards (3 out of 19). A Fisher-Freeman-Halton exact test was performed to test whether the frequency distribution of the standards across the regions was significantly different from the expected distribution. The results showed that there was no significant difference (p > 0.05). However, the effect size, measured by Cramer's V, was 0.55, indicating a relatively strong association between the regions and the standards.

 

Table 4

Comparing Components of Student Support and Administration Standards across the Six Regions

Code | Student Support and Administration Standards | Total (regions mentioning)
2-1 | Stating Entry-Level Requirements | 2
2-2 | Sharing Understanding of the Course Rationale | 1
2-3 | Stating Expectations of Learners | 4
2-4 | Stating Minimum Technology Requirements | 3
2-5 | Providing Logical, Consistent, and Efficient Navigation | 1
2-6 | Giving Orientation Information | 5
2-7 | Stating Minimum Computer and Digital Literacy Skills | 1
2-8 | Setting Realistic Course Entry Requirements | 1
2-9 | Providing Support Services | 1
2-10 | Providing Advice and Guidance on Study Skills | 3
2-11 | Specifying and Clarifying Online Student Group Management | 2
2-12 | Informing Participants about Netiquette and Codes of Behavior | 1
2-13 | Specifying Timetables and Deadlines for Student Work | 1
2-14 | Providing Access to Support Services for Personal and Learning Issues | 3
2-15 | Communicating Pedagogical Rationale for E-learning Approaches and Technologies | 1
2-16 | Compiling an Integrated Database about Learners | 1
2-17 | Teaching Digital Literacy | 3
2-18 | Providing Varied Opportunities for Self-monitoring and Reflection | 1
2-19 | Offering an E-portfolio Service | 2
2-20 | Using Learners’ Feedback to Improve Course Design | 3
2-21 | Having a Built-in Survey System | 4
Total by region | Africa 9, America 7, East Asia 6, West Asia 2, Europe 14, Oceania 6 | 44

 

        

     Table 4 shows the presence or absence of 21 standards related to student support and administration in the six regions. For example, the table reveals that giving orientation information (2-6) is a common standard across all regions except Oceania. On the other hand, providing logical, consistent, and efficient navigation (2-5) and stating minimum computer and digital literacy skills (2-7) are unique standards for America. The table also shows that Europe has the most standards (14 out of 21), followed by Africa (9 out of 21), while West Asia has the fewest standards (2 out of 21). A Fisher-Freeman-Halton exact test was performed to test whether the frequency distribution of the standards across the regions was significantly different from the expected distribution. The results showed that there was no significant difference (p > 0.05). However, the effect size, measured by Cramer's V, was 0.59, indicating a relatively strong association between the regions and the standards.

 

Table 5

Comparing Components of Materials and Presentation across the Six Regions

Code | Materials and Presentation Standards | Total (regions mentioning)
3-1 | Providing Aligned Supplemental Learning Resources | 3
3-2 | Offering Culturally Diverse and Bias-free Materials | 1
3-3 | Ensuring Accuracy and Currency of Course Materials | 2
3-4 | Improving Materials Based on Need Analysis Results | 3
3-5 | Avoiding Adult Content and Unnecessary Advertisements | 1
3-6 | Creating Engaging and Interactive Learning Materials | 1
3-7 | Providing Accessible Course Materials for Diverse Learners | 6
3-8 | Offering Self-paced Materials | 1
3-9 | Providing Learning Materials that Facilitate Self-Assessment | 1
3-10 | Providing Alternative Formats of Materials to Meet the Accessibility Needs of Individual Students | 1
3-11 | Providing Access to Online Libraries and Study Advisors | 3
3-12 | Delivering Content Concisely | 1
3-13 | Ensuring that the Teacher's Voice is Warm, Audible, Articulate, and Engaging | 1
3-14 | Summarizing Group Discussions and Notifying Learners | 1
3-15 | Providing Examples to Consolidate Understanding | 1
3-16 | Providing Closure for Topics | 1
3-17 | Activating Prior Knowledge | 1
3-18 | Asking Retention and Transfer Questions | 1
Total by region | Africa 3, America 6, East Asia 2, West Asia 10, Europe 7, Oceania 2 | 30

 

           Table 5 shows the presence or absence of 18 standards related to materials and presentation in the six regions. For example, the table reveals that providing accessible course materials for diverse learners (3-7) is a common standard across all regions. On the other hand, delivering content concisely (3-12) and ensuring that the teacher’s voice is warm, audible, articulate, and engaging (3-13) are unique standards for West Asia. The table also shows that West Asia has the most standards (10 out of 18), followed by America (7 out of 18), while East Asia and Oceania have the fewest standards (2 out of 18). A Fisher-Freeman-Halton exact test was performed to test whether the frequency distribution of the standards across the regions was significantly different from the expected distribution. The results showed that there was no significant difference (p > 0.05). However, the effect size, measured by Cramer's V, was 0.61, indicating a strong association between the regions and the standards.

 

Table 6

Comparing Components of Learning Tasks and Activities across the Six Regions

Code | Learning Tasks and Activities Standards | Total (regions mentioning)
4-1 | Promoting Learning Ownership and Self-monitoring through Activities | 3
4-2 | Applying Readability Principles to Activities and Tasks | 1
4-3 | Including Assignments or Activities to Engage Learners | 4
4-4 | Providing Regular Opportunities for Learner-Learner Interaction | 5
4-5 | Providing Regular Opportunities for Learner-Instructor Interaction | 6
4-6 | Presenting Activities Effectively and Engagingly | 1
4-7 | Minimizing Distractions through Tasks and Activities | 1
4-8 | Connecting with Non-campus Professionals and Professions | 1
4-9 | Adapting Learning Activities to Learners’ Needs and Preferences | 3
4-10 | Providing Regular and Sufficient Access to Tutors | 3
4-11 | Offering Diversity in the Learning Experiences | 1
4-12 | Providing Social Media Opportunities to Support Student Communities | 4
4-13 | Promoting Formal and Informal Online Mentoring and Peer-to-Peer Help and Learning | 2
4-14 | Encouraging and Developing Creative and Critical Thinking, Independent and Lifelong Learning, and Interpersonal Communication and Team Work Skills | 2
4-15 | Allocating Team Projects | 1
4-16 | Offering Interactive Assignments | 1
4-17 | Providing Accessible Activities for Diverse Learners | 6
4-18 | Promoting Self-directed Learning in Online Environments | 1
Total by region | Africa 9, America 10, East Asia 4, West Asia 9, Europe 10, Oceania 4 | 46

 

          Table 6 shows the presence or absence of 18 standards related to learning tasks and activities in the six regions. For example, the table reveals that providing regular opportunities for learner-instructor interaction (4-5) and providing accessible activities for diverse learners (4-17) are common standards across all regions. On the other hand, allocating team projects (4-15) and offering interactive assignments (4-16) are unique standards for West Asia. The table also shows that Europe and America have the most standards (10 out of 18), while East Asia and Oceania have the fewest standards (4 out of 18). A Fisher-Freeman-Halton exact test was performed to test whether the frequency distribution of the standards across the regions was significantly different from the expected distribution. The results showed that there was no significant difference (p > 0.05). However, the effect size, measured by Cramer's V, was 0.48, indicating a relatively strong association between the regions and the standards.

 

Table 7

Comparing Components of Assessment and Feedback across Six Regions

Code | Assessment and Feedback Standards | Total (regions mentioning)
5-1 | Sharing Assessment Rubrics that Define Learner Expectations | 2
5-2 | Assessing Objectives or Competencies | 1
5-3 | Measuring Learner Progress through Formative Assessment | 3
5-4 | Providing Automatic Marking and Feedback for Formative Assessment | 1
5-5 | Offering a Quiz System with Increasing Difficulty | 1
5-6 | Balancing Formative and Summative Assessment | 3
5-7 | Using Multiple Methods and Sources for Course Evaluation | 1
5-8 | Applying Innovative Assessment Approaches | 2
5-9 | Providing Opportunities for Mastery Demonstration | 1
5-10 | Setting Criteria for Online Collaboration Assessment | 1
5-11 | Ensuring Flexibility of Media Scheduling and Use in Assessment | 1
5-12 | Monitoring Continuously or Cyclically for Improvement | 2
5-13 | Informing Learners about Assessment Conditions and Outcomes | 1
5-14 | Providing Relevant, Deep, and Timely Feedback | 5
5-15 | Providing Feedback via Asynchronous and Synchronous Online Tools | 2
5-16 | Ensuring Timeliness, Fairness, and Appropriateness of Assessment | 2
5-17 | Ensuring Validity and Reliability of Assessment Materials | 1
5-18 | Correcting and Giving Feedback on Learner Assignments | 1
Total by region | Africa 4, America 5, East Asia 7, West Asia 4, Europe 9, Oceania 2 | 31

 

 

         Table 7 compares 18 standards for assessment and feedback in e-learning courses across the six regions. For example, the table reveals that all regions except America mention the standard of providing relevant, deep, and timely feedback (5-14). However, only West Asia follows the standard of correcting and giving feedback on learner assignments (5-18). The table also indicates that Europe has the most standards (9 out of 18), while Oceania has the fewest standards (2 out of 18). A Fisher-Freeman-Halton exact test was performed to test whether the frequency distribution of the standards across the regions was significantly different from the expected distribution. The results showed that there was no significant difference (p > 0.05). However, the effect size, measured by Cramer’s V, was 0.66, indicating a strong association between the regions and the standards.

 

Table 8

Comparing Components of Media and Technology Use across the Six Regions

Code | Media and Technology Use Standards | Total (regions mentioning)
6-1 | Maintaining a Consistent User Interface | 1
6-2 | Detecting Plagiarism and Collusion | 1
6-3 | Using Music Properly | 1
6-4 | Employing Various Media for Engagement and Learning | 1
6-5 | Using Format and Text Color Purposefully | 1
6-6 | Using Multimedia to Facilitate Accessibility | 3
6-7 | Balancing Synchronous and Asynchronous Environments | 1
6-8 | Providing Access to Recordings of Synchronous Sessions | 2
6-9 | Making Teacher Presentations Easily Accessible | 1
Total by region | Africa 0, America 2, East Asia 1, West Asia 6, Europe 3, Oceania 0 | 12

   

          Table 8 compares nine standards for media and technology use in e-learning courses across the six regions. For example, the table shows that only Europe follows the standards of maintaining a consistent user interface (6-1) and detecting plagiarism and collusion (6-2). However, only West Asia follows the standards of using music properly (6-3) and employing various media for engagement and learning (6-4). The table also indicates that West Asia has the most standards (6 out of 9), while the standards manuals of Africa and Oceania do not mention this standard category at all. A Fisher-Freeman-Halton exact test was performed to test whether the frequency distribution of the standards across the regions was significantly different from the expected distribution. The results showed that there was no significant difference (p > 0.05). The effect size, measured by Cramer's V, was 0.74, indicating a strong association between the regions and the standards of Media and Technology Use.

  4. Discussion

           This study analyzed e-learning standards across six regions: Africa, America, East Asia, West Asia, Europe, and Oceania. The goal was to compare pedagogical approaches and inform a more inclusive and universal e-learning standards manual. We conducted a content analysis of six standards manuals and identified 103 standard items with 204 references, grouped into six standard categories. The results showed a significant variation in the frequency of standard references across regions. Although most of the p-values did not reach the conventional significance (0.05), the moderate to strong effect sizes were noteworthy. This was due to the limited references to standards compared to the number of cells in the contingency tables. With small sample sizes, p-values may not reflect the practical significance or magnitude of effects (Dunkler et al., 2020). Therefore, our study focused on effect sizes as they offered more meaningful and substantive interpretations. The effect sizes indicated a varied distribution of standards across the regions. The synthesis of the findings highlighted the region-specific e-learning approaches, emphasizing both commonalities and distinctions that influenced the pedagogical standards across diverse contexts.

We found notable variations in the frequency and distribution of the pedagogical standards across the regions, indicating different levels of alignment and emphasis. For instance, Europe had the highest number of references to pedagogical standards (51.46%), while Oceania (16.50%) and East Asia (22.33%) had the lowest. These variations may reflect the diverse approaches and priorities of e-learning practices across different geographical contexts, which could be influenced by factors such as culture, policy, infrastructure, and resources (Kong et al., 2014; Zaharias, 2008). For example, the e-learning standards of East Asia may emphasize technical constraints and the design and delivery of e-learning content rather than pedagogical aspects. In contrast, the emphasis on pedagogical aspects in the e-learning manuals of Europe and America may reflect a stronger focus on instructional design and educational theory in these regions (Hillen & Landis, 2014). Our findings are consistent with previous studies and reports that have highlighted regional disparities in e-learning standards and their implications for the quality of and access to remote learning (Haderlein et al., 2021; Kennedy et al., 2022). These disparities could have significant impacts on the educational outcomes and experiences of students and teachers.

            The examination of e-learning standard items across the six regions unveils a complex interplay of commonalities and distinct regional emphases. On the commonalities side, the pervasive acknowledgment of the importance of providing accessible course materials for diverse learners transcends regional boundaries, signifying a global commitment to inclusivity (Gallagher & Knox, 2019). In addition, there was a consensus among all regions that regular learner-instructor interaction enhances the learning environment and skill acquisition for students. This is in line with numerous studies showing that student-instructor interaction has positive effects on improving knowledge and thinking skills (Hussin et al., 2019) and contributes to learners' continuous intentions for online learning (Li et al., 2021).

              When it comes to regional differences, Europe emerged as a stronghold for standards related to student support and administration, evident in its significantly higher frequency compared to those of East Asia and Oceania. This divergence suggests varying regional priorities, perhaps influenced by cultural and educational frameworks (Zaharias, 2008). In the domains of media and technology use and materials presentation, West Asia's unique standards, such as prioritizing music usage and emphasizing various media for engagement and learning, underscore a distinctive approach (Masoumi, 2010). This difference may be shaped by varying regional priorities and educational traditions. For example, a study on the challenges of Iranian e-learning programs highlights the importance of content, technological infrastructure, and the need to establish an e-learning culture through websites and the media (Abbasi Kasani et al., 2020). On the other hand, the mention of consistent user interfaces and plagiarism detection in Europe, absent in other regions, may reflect a commitment among European e-learning experts to enhancing the user experience (de Lera et al., 2013) and ensuring academic integrity in e-learning (e.g., Foltynek et al., 2023). These granular variations in standards adoption highlight the need for region-specific e-learning strategies that acknowledge and address the particular educational landscape of each area (Kong et al., 2014; Singh & Reed, 2002). In addition, many standard items may be unique to certain regions because empirical evidence does not support their ubiquitous use in all e-learning environments. For example, some studies have indicated that the use of music or various media in an online learning environment may be redundant and may even do a disservice to the realization of course objectives (de la Mora Velasco & Hirumi, 2020).

            In interpreting the regional variations in e-learning standards revealed in the results, the lens of cultural dimensions, particularly Hofstede's model and Hall's high-context and low-context cultures, offers insightful perspectives. The observed emphasis on specific standards in Europe, such as aligning courses with gender equity and social justice, resonates with the lower power distance and higher individualism characteristic of the region, reflecting a commitment to egalitarian values and individual rights (Van Herk & Poortinga, 2012). Conversely, East Asia's preference for standards related to readability principles and specific learning difficulties might be influenced by a higher uncertainty-avoidance culture, in which clear guidelines and detailed instructions help mitigate ambiguity (Kemp, 2013). Furthermore, the standards related to assessment and feedback that were prevalent in Europe may be indicative of the region's commitment to a comprehensive evaluation of learning outcomes (Stanley, 2015).

5. Conclusion

         Our study not only sheds light on the existing disparities and commonalities among e-learning standards manuals across the globe but also calls for a collective effort to develop a comprehensive set of standards that captures the richness of pedagogical practice from around the world, ultimately contributing to the advancement of global e-learning. Given the observed disparities in the distribution of the standards across continents, proposing such a comprehensive set emerges as a logical next step: there is a compelling need to consolidate and harmonize e-learning practices globally. The identified 103 standard items, although diverse, provide a foundation for an inclusive and adaptable manual applicable across various educational landscapes. This proposed set of standards could encapsulate pedagogical standards from different regions, bridging the gaps and creating a unified framework that accounts for the unique needs and cultural contexts of diverse educational environments worldwide. Such a comprehensive set of e-learning standards has the potential to streamline curriculum development, enhance the quality of online education, and foster collaboration among educators globally. Its adaptability would enable its implementation in various cultural and regional settings and contribute to a more standardized and effective global e-learning landscape.

            Meanwhile, the practical implications of the findings bear significance for policymakers, educators, and e-learning practitioners on both regional and global scales. Policymakers can leverage this insight to tailor e-learning frameworks that align with the specific needs and preferences of their regions. For educators, understanding the regional emphasis on certain standards can inform instructional design, fostering pedagogical approaches that resonate with the prevailing cultural and regional contexts. In addition, e-learning practitioners stand to benefit by incorporating these insights into the development of learning platforms, ensuring that they are not only technologically robust but also culturally and pedagogically sensitive. Furthermore, these findings can contribute to the development of regional and international guidelines for e-learning standardization, facilitating a more cohesive and globally accepted approach to e-learning practices. By acknowledging and embracing the diversity in e-learning standards across regions, stakeholders can collectively work towards establishing a more inclusive and adaptable framework for online education.

             Finally, reflecting on the strengths and limitations of our study is paramount to understanding the robustness and potential constraints of our findings. Methodologically, our analysis of e-learning standards across six regions provides a rich and varied dataset, offering a nuanced understanding of global practices, and the use of robust statistical tests enhanced the rigor of the study. Nevertheless, it is crucial to acknowledge the limitations inherent in the reliance on standards manuals and the potential for subjectivity in their interpretation. The generalizability of the data may be constrained by the limited set of standards manuals analyzed and the evolving nature of e-learning practices. Moreover, regional nuances and variations within each region may not be fully captured by our broad categorization. Recognizing these limitations, we suggest that future researchers employ a more extensive and diverse dataset, incorporating a qualitative dimension to capture the subtleties of regional e-learning practices and standards.

References

Abbasi Kasani, H., Shams Mourkani, G., Seraji, F., Rezaeizadeh, M., & Abedi, H. (2020). E-learning challenges in Iran: A research synthesis. International Review of Research in Open and Distributed Learning, 21(4), 96-116. https://doi.org/10.19173/irrodl.v21i4.4677
 
Ahangari M, Torkzadeh J, Mohammadi M, Marzoghi R, & Hashemi S. (2019). Identifying the Components of Evaluating the Internal Effectiveness for Academic E-courses: Qualitative Study. Journal of Iranian Higher Education, 11(1), 125-159. [In Persian]
 
Association of Asian Open Universities (AAOU). (2010). Quality assurance framework. Retrieved from http://www.aaou.net.
 
Åström, E. (2008). E-learning quality: Aspects and criteria for evaluation of e-learning in higher education. (Högskoleverkets rapportserie 2008: 11 R). Solna: Swedish National Agency for Higher Education.
 
Australasian Council on Online Distance and eLearning. (2014). The ACODE Benchmarks for Technology Enhanced Learning. Retrieved from: http://www.acode.edu.au/course/view.php?id=5
 
Barari, N., Alami, F., Rezaeizadah, M., & Khorasani, A. (2019). Evaluating the Goals of High Levels of Learning in E-Learning Environments (Standards & Indicators). Journal of Instruction and Evaluation, 12(45), 111-132. https://doi.org/10.30495/jinev.2019.665920 [In Persian]
 
Barker, K. (2002). Canadian recommended e-learning guidelines. Vancouver, BC: FuturEd for Canadian Association for Community Education and Office of Learning Technologies. HRDC
 
Bibi, T., Rokhiyah, I. & Mutiara, D. (2018). Comparative study of quality assurance practices in open distance learning (ODL) universities. International Journal of Distance Education and E-Learning (IJDEEL), 4(1). 26-39, https://doi.org/10.36261/ijdeel.v4i1.478
 
Comités Interinstitucionales para la Evaluación de la Educación Superior. (2017). Principios y estándares para la evaluación y acreditación de programas educativos en instituciones de educación superior. Modalidad a distancia. Comités Interinstitucionales para la Evaluación de la Educación Superior. https://r.issu.edu.do/l?l=13133lT5 [In Spanish]
 
Council on Higher Education (CHE). (2014). Distance Higher Education Programmes in a Digital Era: Good Practice Guide. Pretoria: CHE.
 
Darojat, O. (2013). Quality assurance in distance teaching universities: A comparative study in Thailand, Malaysia, and Indonesia [Doctoral dissertation, Simon Fraser University].
 
De la Mora Velasco, E., & Hirumi, A. (2020). The effects of background music on learning: A systematic review of literature to guide future research and practice. Educational Technology Research and Development, 68, 2817-2837. https://doi.org/10.1007/s11423-020-09783-4
 
De Lera, E., Almirall, M., Valverde, L., Gisbert, M. (2013). Improving user experience in e-learning, the case of the Open University of Catalonia. In: Marcus, A. (ed.). Design, user experience, and usability. Health, learning, playing, cultural, and cross-cultural user experience. DUXU 2013. Lecture notes in computer science, 8013. Springer. https://doi.org/10.1007/978-3-642-39241-2_21
 
Dubey, P., Dubey, P., & Sahu, K. K. (2023). An Investigation on Remote Teaching Approaches and The Social Impact of Distance Education. Redefining Virtual Teaching-Learning Pedagogy, 275-293. https://doi.org/10.1002/9781119867647.ch15
 
Dunkler, D., Haller, M., Oberbauer, R., & Heinze, G. (2020). To test or to estimate? P-values versus effect sizes. Transplant international: official journal of the European Society for Organ Transplantation, 33(1), 50–55. https://doi.org/10.1111/tri.13535
 
Endean, M., Bai, B., & Du, R. (2010). Quality standards in online distance education. International Journal of Continuing Education & Lifelong Learning, 3(1). https://w5.hkuspace.hku.hk/journal/index.php/ijcel
 
Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). ENAI Recommendations on the Ethical Use of Artificial Intelligence in Education. International Journal for Educational Integrity, 19(1), 12. https://doi.org/10.1007/s40979-023-00133-4
 
Friesen, N. (2005). Interoperability and learning objects: An overview of e-learning standardization. Interdisciplinary Journal of E-Learning and Learning Objects, 1(1), 23-31. https://www.learntechlib.org/p/44864/.
 
Gallagher, M., & Knox, J. (2019). Global technologies, local practices. Learning, media and technology, 44(3), 225-234. https://doi.org/10.1080/17439884.2019.1640741
 
Ghuman, A., Mahajan, J., Bhatia, S., Singh, J., & Kulkarni, M. D. (2017). Empowering e-learning with localization. In 5th National Conference on E-Learning & E-Learning Technologies (ELELTECH) (pp. 1-6). IEEE.
 
Haderlein, S. K., Saavedra, A. R., Polikoff, M. S., Silver, D., Rapaport, A., & Garland, M. (2021). Disparities in Educational Access in the Time of COVID: Evidence from a Nationally Representative Panel of American Families. AERA Open, 7. https://doi.org/10.1177/23328584211041350
 
Hillen, S. A., & Landis, M. (2014). Two perspectives on e-learning design: A synopsis of a U.S. and a European analysis. International Review of Research in Open and Distance Learning, 15(4), 199–225. https://doi.org/10.19173/irrodl.v15i4.1783
 
Hussin, W., Harun, J., & Shukor, N. (2019). A Review on the Classification of Students' Interaction in Online Social Collaborative Problem-based Learning Environment: How Can We Enhance the Students' Online Interaction? Universal Journal of Educational Research, 7, 125-134. https://doi.org/10.13189/ujer.2019.071615
 
INEP, (2017). Instrumento de Avaliação Institucional Externa - Presencial e a distância - Recredenciamento Transformação de Organização Acadêmica (External institutional evaluation instrument - Classroom-based and distance - Re-accreditation and change of academic status), Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira, Brasília. [In Portuguese]
 
Inglis, A. (2005). Quality Improvement, Quality Assurance, and Benchmarking: Comparing two frameworks for managing quality processes in open and distance learning. International Review of Research in Open and Distributed Learning, 6(1), 1–13. https://doi.org/10.19173/irrodl.v6i1.221
 
Institute of Standards and Industrial Research of Iran (2008). E-learning-Specifications. http://www.isiri.org/portal/files/std/10000.pdf [In Persian]
 
Jethro, O. O., Grace, A. M., & Thomas, A. K. (2012). E-learning and its effects on teaching and learning in a global age. International Journal of Academic Research in Business and Social Sciences, 2(1), 203.
 
Kemp, L. J. (2013). Introducing blended learning: An experience of uncertainty for students in the United Arab Emirates. Research in Learning Technology, 21. http://dx.doi.org/10.3402/rlt.v21i0.18461
 
Kennedy, A. I., Mejía-Rodríguez, A. M., & Strello, A. (2022). Inequality in remote learning quality during COVID-19: student perspectives and mitigating factors. Large-scale Assessments in Education, 10(1), 29. https://doi.org/10.1186/s40536-022-00143-7
 
Khan B. H. (2015). Introduction to e-learning. In Khan B. H., Ally M. (Eds.), International handbook of e-learning, volume 1: Theoretical perspectives and research (pp. 1–40). New York, NY: Routledge.
 
Kong, S. C., Chan, T. W., Huang, R., & Cheah, H. M. (2014). A review of e-learning policy in school education in Singapore, Hong Kong, Taiwan, and Beijing: implications to future policy planning. Journal of Computers in Education, 1, 187-212. https://doi.org/10.1007/s40692-014-0011-0
 
Lee, D. K. (2016). Alternatives to P value: Confidence interval and effect size. Korean Journal of Anesthesiology, 69(6), 555–562. https://doi.org/10.4097/kjae.2016.69.6.555
 
Li, Y., Nishimura, N., Yagami, H., & Park, H. (2021). An Empirical Study on Online Learners’ Continuance Intentions in China. Sustainability, 13, 889. https://doi.org/10.3390/SU13020889.
 
Manzon, M. (2014). Comparing Places. In: Bray, M., Adamson, B., Mason, M. (eds) Comparative Education Research. CERC Studies in Comparative Education (pp. 97-137). Springer, Cham. https://doi.org/10.1007/978-3-319-05594-7_4
 
Marshall, S. (2006). E-Learning Maturity Model Version Two: New Zealand Tertiary Institution E-Learning Capability: Informing and Guiding E-Learning Architectural Change and Development Project Report. Report to the New Zealand Ministry of Education. Wellington.
 
Masoumi, D. (2010). Quality in E-learning Within a Cultural Context. University of Gothenburg.
 
Mazandarani, A. A., Taghaddomi, M. S., & Armand, M. (2023). The transferability of university textbook standards to e-learning environments: the perceptions of university professors. University Textbooks; Research and Writing, 27(52), 116-142. https://doi.org/10.30487/rwab.2023.2010288.1579 [In Persian]
 
Mohamed, A. (2005). Distance higher education in the Arab region: The need for quality assurance frameworks. Online journal of distance learning administration, 8(1), 1-10.
 
Niyazi, M., Barekat, G., & Bahmaie, L. (2021). Factors affecting the quality of e-learning in Farhangian University of Khuzestan Province: Based on Grounded theory approach. Educational Development of Judishapur, 12(1), 235-247. https://doi.org/10.22118/edc.2020.239359.1450 [In Persian]
 
Ohani Zonouz, V., Yari Haj Ataloo, J., Adib, Y. & Daneshvar, Z. (2022). Designing Quality Evaluation Model in the Electronic Curriculum in Higher Education. Education Strategies in Medical Sciences, 15(4), 389-400. [In Persian]
 
Pape, L., & Wicks, M. (2009). National Standards for Quality Online Programs. International Association for K-12 Online Learning.
 
Pedro, N. S., & Kumar, S. (2020). Institutional support for online teaching in quality assurance frameworks. Online Learning, 24(3), 50–66. https://doi.org/10.24059/olj.v24i3.2309
 
Powell, A., & Walsh, S. (2019). National standards for quality online courses (3rd ed.). Quality Matters. Virtual Learning Alliance. Retrieved from https://www.nsqol.org/wp-content/uploads/2019/09/National-Standards-for-Quality-Online-Courses-Catalog3-2019.09.01.pdf
 
Pourkarimi, J., & Alimardani, Z. (2021). Phenomenological Analysis of the Factors Affecting Interactions in the E-Learning Environment. Research in School and Virtual Learning, 8(3), 35-46. https://doi.org/10.30473/etl.2021.49623.3106 [In Persian]
 
Purnamasari, N., Siswanto, S., & Malik, S. (2020). E-module as an emergency-innovated learning source during the COVID-19 outbreak. Psychology, Evaluation, and Technology in Educational Research, 3(1), 1-8. http://dx.doi.org/10.33292/petier.v3i1.53
 
Rodrigues, H., Almeida, F., Figueiredo, V., & Lopes, S. L. (2019). Tracking e-learning through published papers: A systematic review. Computers & Education, 136, 87-98. https://doi.org/10.1016/j.compedu.2019.03.007
 
Romiszowski, A. J. (2016). Designing instructional systems: Decision making in course planning and curriculum design. Routledge.
 
Shraim, K. (2020). Quality standards in online education: The ISO/IEC 40180 framework. International Journal of Emerging Technologies in Learning (iJET), 15(19), 22-36. http://dx.doi.org/10.3991/ijet.v15i19.15065
 
Singh, R. (2019). E-learning: Learning Management System for the Next-Generation Libraries. INFLIBNET Centre, Gandhinagar. http://ir.inflibnet.ac.in/handle/1944/2343
 
Singh, H., & Reed, C. (2002). Demystifying e‐learning standards. Industrial and Commercial Training, 34(2), 62-65. https://doi.org/10.1108/00197850210417546
 
Stanley, J. (2015). Learning outcomes: from policy discourse to practice. European Journal of Education, 50(4), 404-419.
 
 
Taghaddomi, M. S., & Mazandarani, A. A. (2023). The best practice in developing asynchronous online educational materials: the attitudes of educational materials experts in developing university textbooks for the students of the humanities. University Textbooks; Research and Writing, 26(51), 144-168. https://doi.org/10.30487/rwab.2023.1978081.1539 [In Persian]
 
Ubachs, G., & Konings, L. (Coords.) (2016). Quality Assessment for e-Learning: A Benchmarking Approach (3rd ed.). Heerlen: European Association of Distance Teaching Universities (EADTU). Retrieved from http://excellencelabel.eadtu.eu/images/E-xcellence_manual_2016_third_edition.pdf
 
UNESCO (2024, January). Digital learning policies. Unesco.org. https://www.unesco.org/en/digital-education/policies
 
Université Virtuelle Africaine (2014). AVU Quality Assurance (QA) Framework: for Open, Distance and eLearning Programmes. Retrieved from: http://www.avu.org/avuweb/wp-content/uploads/2017/07/QA_FRAMEWORK.pdf
 
Van Herk, H., & Poortinga, Y. H. (2012). Current and historical antecedents of individual value differences across 195 regions in Europe. Journal of Cross-Cultural Psychology, 43(8), 1229-1248. https://psycnet.apa.org/doi/10.1177/0022022111429719
 
Weber, L., Mahfooz, S. B., & Hovde, K. (2010). Quality assurance in higher education: A comparison of eight systems. World Bank.
 
Zaharias, P. (2008). Cross-cultural differences in perceptions of e-learning usability: An empirical investigation. International Journal of Technology and Human Interaction (IJTHI), 4(3), 1-26. https://doi.org/10.4018/jthi.2008070101
 
Zawacki-Richter, O., & Qayyum, A. (2019). Open and distance education in Asia, Africa and the Middle East: National perspectives in a digital age. Springer Nature.
 
Zuhairi, A., Raymundo, M.R.D.R. & Mir, K. (2020). Implementing quality assurance system for open and distance learning in three Asian Open Universities: Philippines, Indonesia and Pakistan. Asian Association of Open Universities Journal, 15(3), 297-320. https://doi.org/10.1108/aaouj-05-2020-0034
Volume 7, Issue 4
Autumn 1403 (Solar Hijri)
Pages 3227-3249

  • Received: 25 Dey 1402
  • Revised: 12 Esfand 1402
  • Accepted: 6 Ordibehesht 1403