E-readiness in Kenya 2006


Research Objectives

The main objective of this study was to assess the level of preparedness of Higher Education (HE) institutions in Kenya to use Information and Communication Technologies (ICT) in teaching, learning, research, and management. Indirectly, it also assessed the capacity or readiness of these institutions to use electronic learning (e-learning) to improve the quality of education and ultimately increase access to higher education in the country. The effective use of ICT in higher education institutions would also ensure that the Kenyan tertiary-level workforce participates effectively in the emerging global knowledge economy.

Terms of Reference (TOR) for the study:

  • Carry out a diagnostic assessment of the overall e-readiness of 17 universities, eight middle-level colleges (including polytechnics), and five research institutions that are members of the Kenya Education Network (KENET), with a particular focus on the use of ICT in teaching, learning, and research.
  • Develop an e-readiness assessment framework and indicators appropriate for Kenyan Higher Education institutions. The framework would be based on the Center for International Development (CID) tool titled, "Networked readiness guide for developing countries."
  • Create a database, updatable online, of core institutional demographics and ICT infrastructure, including information technology (IT) applications, for each member institution.
  • Identify the critical issues that confront member institutions and impede the adoption of ICT in teaching, learning, research, and management.
  • Organize at least two stakeholders' workshops for vice chancellors or heads of member institutions, at the start and end of the study, to discuss the findings of the diagnostic e-readiness assessment.
  • Present the findings of the e-readiness survey to a wider group of stakeholders including the leadership of KENET member institutions, faculty, students, senior government officials, private sector, and development partners at a mini-convention or conference.
  • Prepare a final e-readiness report for KENET to be distributed to all stakeholders.

This survey achieved all of the above terms of reference.

The survey was conducted by the following team of researchers:

  • Professor Meoli Kashorda, Principal Investigator and research team leader
  • Professor Timothy Waema, ICT and strategic management researcher and deputy team leader
  • Professor Mary Omosa, ICT and society researcher
  • Eng. Victor Kyalo, ICT infrastructure researcher

This survey was supported by research grants totalling $54,000 from the Rockefeller Foundation ($30,000) and the Ford Foundation ($24,000). The grants were obtained through the Kenya Education Network, a trust created in 1999 by Kenyan universities to provide affordable Internet services to its member institutions. All 25 higher education institutions surveyed are members of KENET.

Assessment framework and key results

The study used a modified diagnostic e-readiness assessment framework containing a set of 17 ICT indicators grouped under the following five categories:

  • Network Access (four indicators - Information infrastructure, Internet availability, Internet affordability, Network speed and quality)
  • Networked Campus (two indicators - Network environment, E-campus)
  • Networked Learning (four indicators - Enhancing education with ICTs, Developing the ICT workforce, ICT Research and innovation, ICTs in libraries)
  • Networked Society (four indicators - People and organizations online, Locally relevant content, ICTs in everyday life, ICTs in the workplace)
  • Institutional ICT Policy and Strategy (three indicators - ICT strategy, ICT financing, ICT human capacity)

The framework was derived from an e-readiness assessment tool originally developed by the Center for International Development (CID) at Harvard University. The new framework contains two new categories of indicators (Networked Campus and Institutional ICT Policy and Strategy), seven new indicators, and over 60 new sub-indicators specific to higher education institutions. However, like the CID tool, the new framework is diagnostic: it stages each indicator on a scale of 1 to 4, where 1 represents unprepared and 4 the highest degree of readiness. A diagnostic framework makes it easy to use the results in institutional ICT strategy development and to monitor progress of ICT strategy implementation. The survey therefore identified a set of 15 strategic ICT sub-indicators, critical for determining the degree of ICT readiness, that higher education institutions could monitor on an annual basis.
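As an illustrative sketch, the framework's categories and indicators can be represented as a simple data structure. The category and indicator names below are taken from the list above; the representation itself is an assumption for illustration, not an artifact of the study.

```python
# Illustrative model of the e-readiness assessment framework.
# Category and indicator names come from the report; the dictionary
# structure is a sketch, not part of the study itself.
FRAMEWORK = {
    "Network Access": [
        "Information infrastructure", "Internet availability",
        "Internet affordability", "Network speed and quality",
    ],
    "Networked Campus": ["Network environment", "E-campus"],
    "Networked Learning": [
        "Enhancing education with ICTs", "Developing the ICT workforce",
        "ICT Research and innovation", "ICTs in libraries",
    ],
    "Networked Society": [
        "People and organizations online", "Locally relevant content",
        "ICTs in everyday life", "ICTs in the workplace",
    ],
    "Institutional ICT Policy and Strategy": [
        "ICT strategy", "ICT financing", "ICT human capacity",
    ],
}

# Five categories containing 17 indicators in total.
total_indicators = sum(len(v) for v in FRAMEWORK.values())
print(len(FRAMEWORK), total_indicators)  # 5 17
```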

Data collection and analysis

Two sets of detailed questionnaires were used to collect data, namely:

  • A hard facts questionnaire that was completed by heads of ICT and other senior university administrators such as finance managers and academic registrars.
  • A perceptions questionnaire (field user survey) that was completed by students and staff in each of the 25 higher education institutions surveyed.

The questionnaires were administered to 25 KENET member institutions, comprising 17 universities and eight tertiary institutions. Sample sizes for the perceptions survey were chosen to be statistically significant for each higher education institution and to capture the diversity of students and staff (e.g., area of study, year of study, gender). All data (hard facts and survey data) was entered into a Web-based database by students from the different universities and is available to authorized users of the member institutions.

Completing the comprehensive hard facts questionnaires proved very challenging because most of the higher education institutions do not have integrated information systems containing student, staff, program, and financial data. The data was therefore not readily available to the heads of ICT and had to be collected from different managers within each institution. The process of collecting and cleaning the hard facts data started in August 2006 and was completed in January 2007. Data analysis used a comprehensive staging framework developed by the research team: the researchers staged each sub-indicator, and the indicator stage was derived as a simple average of the associated sub-indicator stages.
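The staging arithmetic described above can be sketched as follows. The sub-indicator names and stage values here are hypothetical; only the 1-to-4 staging scale and the simple-average rule come from the study.

```python
# Hypothetical sub-indicator stages on the 1 (unprepared) to 4 (highest
# readiness) scale; the averaging rule is the one described in the text.
sub_indicator_stages = {
    "Internet availability": [2, 3, 3],  # hypothetical sub-indicator scores
    "Network environment": [1, 2],       # hypothetical sub-indicator scores
}

def indicator_stage(stages):
    """Indicator stage = simple average of its sub-indicator stages."""
    return sum(stages) / len(stages)

for name, stages in sub_indicator_stages.items():
    print(name, round(indicator_stage(stages), 2))
```

Under this rule, an indicator with hypothetical sub-indicator stages of 2, 3, and 3 would be staged at about 2.67.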