Usability of online courses registration systems: empirical study from Saudi Arabia on ODUS plus

Background: The usability assessment of a system reflects the degree of user satisfaction. Since the 1980s, many models and studies have dealt with this subject. Methods/Findings: This study focuses on the usability of Online Course Registration Systems (OCRS) and proposes a model for measuring their usability. The proposed model is based on six criteria and 21 questions. It is used to evaluate one such OCRS (called ODUS Plus), which students use to register for their courses at the University of Jeddah, Saudi Arabia. 370 students participated in the experiment. Improvements/Application: The result shows that the level of student satisfaction falls just below the perceived importance of the system.


Introduction
An Online Course Registration System (OCRS) allows university students to sign up and register for the courses they can study each semester, based on their area of study and the educational rules. Such an online system can be more beneficial than traditional written applications submitted by post or telephone registration, as it is more versatile and accurate, and significantly reduces the time, money and paperwork involved. As most universities and educational institutions offer a very large range of courses to students across a number of faculties, the provision of online registration is a key benefit.
In human-computer interaction (HCI), usability has to be considered before any feature of the prototype is built (1). Usability means creating a design that puts the users at its centre. In this way, the designer ensures that users' goals and needs are met, resulting in a product that is both efficient and user-friendly. Web usability has been a common concern over the past few decades. There is widespread interest in Web usability, and several international and national organizations have developed guidelines for it. The International Organization for Standardization designed a general framework for Web usability (2), and the U.S. Department of Health and Human Services defined a usability guideline that can be applied in the context of e-health (3). Each guideline has strengths in particular areas. Bevan (4) concluded that no set of guidelines could be judged as perfect because different audiences have different needs, and implied that once the target audience is identified, the corresponding guidelines and themes can be implemented so as to remain relevant.
In this paper, we review existing studies on usability evaluation models to establish the appropriate scope and factors to be considered when evaluating the usability of Online Course Registration Systems. Through the analysis of prior research and questionnaires, the most appropriate usability criteria for OCRS are selected. The selected criteria are then used to evaluate a specific OCRS, ODUS Plus.
The remainder of this paper is structured as follows. Section 2 presents a literature review and analysis of previous work on the evaluation of Website usability. Section 3 presents the proposed usability model for evaluating Online Course Registration Systems. Section 4 presents and discusses the evaluation results. Finally, Section 5 presents the conclusions and future research directions.

Related Works
Usability has multiple definitions. In HCI, the term relates to an accessible user interface that makes a system easy both to learn and to use (5). In the Software Engineering (SE) context of ISO 9126-1, the definition of usability focuses on the software product, which should be easy to learn, operate and understand, as well as being attractive to users and compliant with conditions (6). Meanwhile, according to ISO 9241-11, usability concerns the ability of a product to be used by specified users to achieve particular goals easily and effectively within a specified context (7). The first attempt to implement a framework for usability was introduced by Shackel in 1986 (8), who divided usability into four elements: effectiveness, learnability, flexibility and attitude. Despite many years of usability research, there is still a significant gap concerning how to select a suitable technique to evaluate the usability of Websites. Furthermore, there is a lack of feedback with suggestions to correct the identified problems (9).
Tables 1, 2, 3 and 4 present the usability models reported in the literature; they show that the System Usability Scale (SUS) has a good reliability value, reaching 0.90. Furthermore, the SUS model performs better when treated as a two-dimensional model rather than a one-dimensional one. For these reasons, the analysis conducted in this study is based on the standard SUS item contribution scores, and the score direction of this study is consistent with a 4-point scale (10). Nakamura et al. (9) surveyed usability and User Experience (UX) evaluation techniques in the context of Learning Management Systems (LMSs) through a systematic mapping study. Their results identify the techniques and their characteristics, including type, origin, availability and restrictions, as well as the performing method and learning factors. However, there are still gaps related to certain features and techniques, such as the lack of suggestions for correcting the problems that were highlighted.
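As a concrete illustration of the standard SUS item contribution scoring mentioned above, the following Python sketch computes a single respondent's SUS score from ten 5-point answers; the response values used in the example are hypothetical:

```python
def sus_score(responses):
    """Compute the standard System Usability Scale (SUS) score.

    `responses` is a list of ten answers on a 1-5 scale, in
    questionnaire order. Odd-numbered items are positively worded
    (contribution = response - 1); even-numbered items are negatively
    worded (contribution = 5 - response). The summed contributions
    are multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent: strongly positive on every item.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A uniformly neutral respondent (all answers 3) would score 50, the midpoint of the scale.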
The Usability Metric for User Experience (UMUX) has four items and measures usability as one component in a multi-component software assessment suite. The UMUX validation process, a test of two corporate software systems with different levels of usability, shows that the UMUX score variance is substantial. However, major qualitative differences between the tested systems were found, making additional UMUX testing on more comparable systems critical (11).
The severity of usability problems was rated across nine usability studies by multiple independent evaluators, based on their personal judgement rather than on data-driven assessments. A single study showed a positive correlation between problem frequency and severity, but the average correlation across all the studies was almost zero, suggesting that problem severity and problem frequency should be treated independently (12).
The study in (13) provides a wide-ranging overview of usability evaluation methods and details a new perspective on the issue. It used a systematic mapping review protocol to select 215 out of 1169 studies and to identify the various usability evaluation methods used in the software development process. The study's main aim was to support and inform decision-making in the choice of a suitable technique for usability evaluation; its findings indicate that although there are several usability evaluation methods, determining the most suitable technique for a particular scenario remains a difficult decision.
From Tables 1, 2, 3 and 4, we observe that ease of use, usability and satisfaction are the measures most used in the literature. Ease of use and usability are vital in the model of (16). Furthermore, (18) focuses on usability and its sub-measures: learnability, efficiency, memorability, accuracy and subjective satisfaction. In (19,20,32), the importance of learnability, effectiveness and efficiency in measuring usability has been studied. Satisfaction is measured in all the works presented in Tables 1, 2, 3 and 4. Although several usability models are reported in the literature, most of them concern Websites in general, and none is specific to Online Course Registration Systems (OCRS). The next section presents our model for measuring OCRS usability.

Model for usability of online course registration systems
There are two types of usability evaluation methods. The first type, analytical methods, includes heuristic evaluation, cognitive walkthrough, model-based methods, and review methods; these can be carried out without users. The second type, empirical methods, requires the explicit involvement of users and includes observational usability tests, queries (surveys and interviews) and controlled experiments. Because of the limitations of analytical methods, such as their subjective nature, usability experts may not find all the usability problems; empirical methods are therefore required. The suggested measurement model is based on an empirical method, specifically a survey of a large number of users. The critical success factors of this type of measurement are that the instrument should be carefully prepared and tested, and that the collected data should be carefully analysed.
Any measurement model consists of two main elements, namely the criteria and the relationships among them. From the review of previous studies, it can be concluded that measuring the usability of Online Course Registration Systems can be built upon six main dimensions: effectiveness, efficiency, learnability, friendliness, satisfaction and admiration. Each dimension can be measured through several criteria that work together as indicators. For example, the quick and easy learning of the system, the ability to remember it, and the possibility that the user becomes proficient in its use within a short period are all variables that measure the ease of learning. The suggested measurement model (Figure 1 and Table 5) has multiple layers to measure the usability of Online Course Registration Systems.

Some studies support including a mid-point in rating scales (22), while other studies claim that this mid-point (or neutral position) can lead to a clustering of responses at the mid-point and thus to potentially misleading data. Simple empirical tests can confirm the latter view: when people say they are neutral, they will usually lean one way or another when pressed, with the proportions leaning either way being directly comparable to those who expressed a positive or negative (i.e. not neutral) opinion in the first place. Mid-points are also present in 7-, 9- and 11-point scales, and may provide nuanced results. Many researchers (36) believe that meaningful customer responses are better evaluated without a mid-point. A 4-point scale provides a simple and measurable reporting structure when asking relatively straightforward questions; for example, how many people agree vs. disagree with a particular notion.
Another key advantage of a 4-point scale is that it encourages participants to answer more rapidly, according to how they feel about a particular question, without hesitating or cogitating; it is also understood to be particularly useful for questions that rate services (37,38).
Incorporating a neutral or average option in the middle of the four can make respondents hesitant or reluctant to give a straightforward answer, as they would have an additional option to contemplate. In other types of surveys, many choose the neutral option out of ease alone, leaving the researcher with potentially misleading data. With a neutral point added, deciding which option to select can become a prolonged process rather than a natural reaction.

Data Collection and Analysis
To achieve the main objective of this research, data was collected from a representative sample of the student community of the University of Jeddah in Saudi Arabia. There are approximately 8000 students in this community, and the required sample size was 367, corresponding to a 95% confidence level and a 5% margin of error.
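The required sample size of 367 can be reproduced with Cochran's formula followed by a finite-population correction; a minimal sketch, assuming the conservative proportion p = 0.5:

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite-population correction.

    n0 = z^2 * p * (1 - p) / e^2 gives the sample size for an
    infinite population; it is then adjusted for a finite
    population of size N.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# 8000 students, 95% confidence (z = 1.96), 5% margin of error.
print(required_sample_size(8000))  # → 367
```

With 370 responses collected, the sample slightly exceeds this minimum requirement.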
There were a total of 370 responses from the students, distributed as shown in Table 6. Table 6 shows the distribution of responses according to the students' level of education, which reflects their level of experience and interaction with the ODUS academic system. Students at higher levels of study have more experience in dealing with the system. The students most familiar with the system registered for and dropped courses, inquired about available courses, retrieved information from the system, and used other functions of the web interface.
The Reliability of our Model
The reliability of any measurement model refers to the consistency of measurement among the elements of the model. Cronbach's alpha is one technique for measuring the strength of that consistency: it assesses the reliability, or internal consistency, of a set of scale or test items (Table 7). This study uses two measurement scales; the first evaluates the level of importance of the criteria, while the second evaluates the achievement of the criteria (validity).
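Cronbach's alpha can be computed directly from item-level responses as α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch with hypothetical data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of scale items.

    `items` is a list of k lists, each holding one item's responses
    across all respondents. Alpha = k/(k-1) * (1 - sum of item
    variances / variance of respondents' total scores). The same
    variance estimator is used in numerator and denominator, so the
    population-vs-sample variance scaling cancels out.
    """
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical data: 4 respondents, 2 perfectly consistent items.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

Values of alpha above roughly 0.7 are conventionally taken to indicate acceptable internal consistency.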

The Results of Statistical Analysis
The first dimension of the questionnaire concerns how aware the students are of the usability of the system, in terms of the specifications and characteristics that make it simple to use. The second dimension concerns the extent to which these specifications have been achieved. The results are shown below: • The students' average response on the importance of usability is higher than their average response on the achievement of these specifications, with means of 2.89 and 2.53 respectively; see Figure 2.
• The variance of students' responses on usability importance is low (0.003569), indicating a low dispersion of their responses; the students largely agree, which reinforces the finding that they are not satisfied with the level of achievement in the system, see Figure 3.
• There are gaps between the values of students' awareness of the usability dimensions and the achieved values of those dimensions. Figure 4 illustrates that these gaps are very high, and it can be concluded that the students were dissatisfied with the usability of ODUS Plus.
• There is a statistically significant difference between the current situation and the desired situation. We used a Z-test to compare the current and desired situations of ODUS Plus. Table 8 shows that there is a statistically significant difference between the two. This is further evidence for the conclusion that ODUS Plus did not sufficiently satisfy the desires of the students.
• The null hypothesis, which states that there are no statistically significant differences between the level of awareness of the importance of usability and the level of achievement, is rejected. In fact, the computed Z-value of 2.46 exceeds the critical value at α = 0.05, as shown in Table 9. This supports the alternative hypothesis that there are statistically significant differences: the students are aware of the importance of usability but did not find an acceptable satisfaction level in the system. This leads to the conclusion that the students are dissatisfied with the system.
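The Z-test comparison above can be sketched as follows. The means are those reported in the text (2.89 vs. 2.53), but the standard deviations and group sizes are hypothetical stand-ins for illustration, since the paper does not report per-group standard deviations:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_z(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample Z-test for the difference of two means.

    Returns the Z statistic and the two-sided p-value, using the
    standard error sqrt(sd1^2/n1 + sd2^2/n2).
    """
    se = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    z = (mean1 - mean2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p

# Means from the study; standard deviations (0.45) and equal group
# sizes (370) are assumed values, not taken from the paper.
z, p = two_sample_z(2.89, 0.45, 370, 2.53, 0.45, 370)
print(round(z, 2), p < 0.05)
```

With |Z| above the 1.96 critical value at α = 0.05, the null hypothesis of equal means is rejected, matching the reasoning in the bullet above.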

Conclusion
This study focused on the analysis of the usability of Online Course Registration Systems (OCRS). We used multiple dimensions for evaluating usability, which allows us to observe the efficiency, effectiveness, ease of learning, and ease of use of an OCRS. In addition, our study measures the level of familiarity, satisfaction and admiration of the OCRS. The six dimensions of the OCRS usability test were translated into 21 questions, using variables that represent the learners' awareness of the importance of usability as well as its achievement in the OCRS.
Accordingly, a questionnaire was prepared to collect the opinions of students about the ODUS Plus system (a case of an Online Course Registration System). The data were collected from a sample of 370 students who used the online registration system. The statistical analysis of the collected data indicates the students' dissatisfaction with the usability of ODUS Plus. Furthermore, the result shows that the level of students' satisfaction is just below the perceived importance of the system. Students' satisfaction and the importance of the online registration system could be enhanced by addressing the following limitations.
• The development of Online Course Registration Systems should consider not only the recommendations of Web experts and educational experts but also the opinions of the students, who are the end users.
• More training of students on the Online Course Registration System would enhance their satisfaction when using the system.
• The development of an adaptive version of the Online Course Registration System would enhance student satisfaction and the perceived importance of the system. In particular, the system could suggest the courses most appropriate for each student by considering his or her background, learning style, and so on.
Besides working on the limits of Online Course Registration Systems, future work could focus on: • Collecting the weaknesses of ODUS Plus and finding ways to improve it.
• Generalizing the experiment by analyzing and comparing the Online Course Registration Systems used in Saudi universities in order to identify their strengths and weaknesses.
• Designing and developing an adaptive version of the ODUS Plus Online Course Registration System.