Wednesday 6 May 2009

Summary of Two Evaluation Studies in Educational Technology

Here is a summary of two evaluation studies in educational technology. In the first study we focus on the purpose and methodology used, while in the second study we summarize the technological features that have been evaluated.

Monday 4 May 2009

Comparative and non-Comparative Study



Here is a summary of two studies in the field of educational technology:
  • Comparative study (perception and performance study):
Comparative Analysis of Learner Satisfaction and Learning Outcomes in Online and Face-to-Face Learning Environments

Scott D. Johnson, Steven R. Aragon, Najmuddin Shaik, & Nilda Palma-Rivas
University of Illinois at Urbana-Champaign


  • Non-Comparative Study:

Evaluation of the development of metacognitive knowledge supported by the KnowCat system

Manoli Pifarré & Ruth Cobos
Association for Educational Communications and Technology 2009


The presentation:
http://www.slideshare.net/techsqu/comparartive-and-noncomparative-study


The summary:
http://www.scribd.com/doc/14942989/Comparative-and-nonComparative-Study

Saturday 2 May 2009

Evaluation of CAI

This survey aims to evaluate computer-based instruction software.

The survey is available at the following link: http://www.surveyconsole.com/console/TakeSurvey?id=559055



Reference: http://www.edutek.net/biol106/EDUSOFT-Appendix.html

Models of Evaluation in Educational Technology

Evaluating an e-learning website using a combination of two models: the ACTIONS model and Badrul Khan's model.

http://www.slideshare.net/techsqu/evaluation-an-elearning-website

Saturday 14 February 2009

Evaluating Instructional Technology Implementation in a Higher Education Environment

Summary of the research:


Overview:

The research first reviews the literature and describes the methods used in a myriad of evaluation studies in instructional technology. The review arrives at the following three conclusions:
(1) Multiple data collection methods have been used to collect information from multiple sources;
(2) Various evaluation models or approaches have been followed; and
(3) There are a common set of problems and concerns about past evaluations of technology-enhanced instruction.
Then it provides a concrete example of evaluating a campus-wide learning technology effort (the SCALE Project) at the University of Illinois at Urbana-Champaign. This evaluation spanned three years and used multiple methods.

Evaluation methodology:
Purpose:
Since the research serves different clients, it has more than one objective. The research has defined three main client groups: (1) the campus administration, (2) involved instructors, and (3) the Alfred P. Sloan Foundation (the funding agency). These objectives are:
• Evaluating the impact of ALN (Asynchronous Learning Networks) on professors and students.
• Understanding the economic implications of ALN.

Evaluation approach:
To answer the evaluation questions, the research adopted a mixed-methods approach. Following a document review, both qualitative and quantitative data were collected. Quantitative data took the form of survey results, records of use, achievement gain scores, and an extensive cost-benefit analysis (which is reported in a subsequent article). Qualitative data primarily involved interviews. Some data collection efforts had both qualitative and quantitative elements. For example, the evaluation of computer conferencing involved tallying interactions between various course members as well as a qualitative content analysis of the interactions. Some impact and efficiency data were also collected during years one and two.

Instrument:
Student surveys:
To assess student attitudes and perceptions about the use of ALN, students involved in the sponsored courses were surveyed. A “Conferencing” survey was administered to students enrolled in SCALE-sponsored courses wherein conferencing software was the primary application, and a “Web” survey was administered to students in courses primarily using the Web.

Post-course instructor surveys:
Instructors were asked questions about the time commitment of, level of satisfaction with, and support required for teaching courses with ALN.

Computer support personnel surveys:
Respondents were asked about the training they had received, the types of questions students and instructors asked them, and their satisfaction with various learning technologies used in SCALE-sponsored courses.

Student and TA (teaching assistant) group interviews:
The TA interviews were conducted without students or professors present and focused on issues of computer accessibility, ease of use, student satisfaction, and perceived instructional benefits of using ALN.

Instructor interviews:
An individual one-hour interview focused on professors’ perceptions of ALN effects on certain quality indicators of education. These indicators include quality and quantity of student-to-student interaction; quantity and quality of student-to-instructor interaction; and the sense of community fostered within classrooms.
During year three the focus of all data collection (instructor interviews included) was on assessing instructional efficiencies. During these interviews, information was collected on instructors’ salary and time allocation for teaching, cost per student for each course (with and without ALN), and the infrastructure cost of running an ALN course.

Gains in student achievement:
The evaluation team undertook several quasi-experimental studies during the first two years. These looked at achievement score differences between (1) ALN and non-ALN sections of the same course; (2) semesters taught with ALN and those taught without ALN in the same course (i.e., historical comparisons); and (3) similar courses where one of the professors used ALN and the other taught without ALN.
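As a rough illustration only (not the authors' actual analysis), the following minimal Python sketch shows the kind of section-to-section comparison described above, using Welch's t-test on two sets of final scores. All scores here are invented placeholders.

# Minimal sketch of an ALN vs. non-ALN achievement comparison.
# The scores below are invented for illustration; the original study
# used real course data across several quasi-experimental designs.
from scipy import stats

aln_scores = [78, 85, 82, 90, 74, 88, 81, 79, 86, 84]      # hypothetical ALN section
non_aln_scores = [72, 80, 77, 83, 70, 79, 75, 74, 81, 78]  # hypothetical non-ALN section

# Welch's t-test does not assume equal variances between the two sections.
t_stat, p_value = stats.ttest_ind(aln_scores, non_aln_scores, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The difference in scores is statistically significant at the 0.05 level.")
else:
    print("No statistically significant difference in scores was detected.")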
Content analysis of the SCALE instructors’ conference:
A content analysis of the postings to the conference was conducted as part of the evaluation. The following five categories emerged from the postings made to this conference: (1) announcements and sharing of information; (2) specific technical assistance questions and answers; (3) sharing of best practices; (4) philosophical issues of interest; and (5) general suggestions to the SCALE project staff for improvement.
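For a sense of how such coding might be automated, here is a toy Python sketch that assigns postings to the five categories by keyword matching. The keywords and sample postings are invented placeholders; the actual study relied on human qualitative coding, not keyword rules.

# Toy sketch: coding conference postings into the five emergent categories
# by keyword matching. Keywords and sample postings are invented; the real
# content analysis was performed by human coders.
CATEGORY_KEYWORDS = {
    "announcements and information sharing": ["announcing", "reminder", "fyi"],
    "technical questions and answers": ["error", "how do i", "install"],
    "sharing of best practices": ["worked well", "i recommend", "tip"],
    "philosophical issues": ["pedagogy", "learning theory"],
    "suggestions to project staff": ["please add", "it would help if"],
}

def code_posting(text: str) -> str:
    """Return the first category whose keywords appear in the posting."""
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "uncoded"

print(code_posting("FYI: announcing next week's SCALE workshop."))
print(code_posting("I get an error when I install the conferencing client."))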

Course conferences:
The evaluation team monitored the student computer conferences in several courses throughout the evaluation. The purpose of the monitoring was to tally student and instructor use as well as determine the type of interactions that occurred.
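As a minimal sketch of the tallying step (the post records and roles below are invented placeholders, not the study's data), one could count posts per participant and per role in Python:

# Minimal sketch of tallying conference use. Each post record is an
# (author, role) pair; the records below are invented for illustration.
from collections import Counter

posts = [
    ("alice", "student"), ("prof_lee", "instructor"), ("bob", "student"),
    ("alice", "student"), ("prof_lee", "instructor"), ("carol", "student"),
]

posts_per_author = Counter(author for author, _ in posts)
posts_per_role = Counter(role for _, role in posts)

print("Posts per participant:", dict(posts_per_author))
print("Student vs. instructor posts:", dict(posts_per_role))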


Research resource: Cheryl Bullock and John Ory, American Journal of Evaluation, 2000, 21, 315.

Wednesday 6 August 2008

Web 2.0 Tools



The main features are interaction, social connections, and sharing.

Web 2.0




What are Web 2.0 tools?

Web 2.0 isn’t a specific piece of software or hardware; rather, it is a shared idea, an improved form of Web 1.0. The term has been in occasional use for several years, but now the concepts of RIA (Rich Internet Applications, which include Flash, Ajax, etc.), SOA (Service-Oriented Architecture, which includes feeds, RSS, web services, and mashups), and the Social Web (which includes tagging, wikis, podcasts, blogging, vlogs, etc.) have put the World Wide Web into a renaissance of development.
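To make the “feeds” piece of that list concrete, here is a minimal Python sketch that reads an RSS 2.0 feed using only the standard library. The feed URL is a placeholder; substitute any real feed.

# Minimal sketch of consuming an RSS 2.0 feed, one of the SOA building
# blocks mentioned above. The URL is a placeholder, not a real feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.rss"  # placeholder feed URL

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 nests <item> elements under <channel>.
for item in tree.getroot().findall("./channel/item"):
    title = item.findtext("title", default="(no title)")
    link = item.findtext("link", default="")
    print(f"{title} -> {link}")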

See the video...