Choosing Your Technology
The list below highlights technology tools commonly used to promote student learning across a variety of disciplines. Click on the technology type to see tips for implementation, possible pedagogical uses, a comparison of available tools, examples of faculty use at U-M, and additional related resources.
Online Collaboration Tools
Major group projects in courses can require students to generate, organize, and collaborate on many (often large) computer files, especially when projects involve video. M+Box is a cloud-based file storage and sharing service explicitly designed for collaboration. In addition to solving storage capacity and organization issues, M+Box allows students and instructors to attach comments, tags (to facilitate easy file searches), and editable task lists in the file directory. These features provide easy mechanisms for students to manage and coordinate workflow within teams. Instructors can also use the task lists and commenting features to provide feedback or directions to teams and then monitor what has (or has not) been implemented. M+Box can also generate a single daily e-mail digest for the instructor (site owner), summarizing all activity on the site and facilitating efficient oversight of student projects. U-M example: Melissa Gross, Kinesiology.
Group decision making is a critical component of teamwork, especially when projects require students to evaluate competing ideas. Teams often pursue suboptimal approaches to projects due to poor group process. To enable more equitable and conceptually sound decision making, instructors can use Google Docs to shift decisions from face-to-face discussions to synchronous, text-based online discussions among geographically dispersed team members. Students simultaneously access project materials pre-loaded into Google Docs and negotiate decisions at predetermined times using the commenting and chat features. Through each team's Google Docs, instructors can monitor group dynamics remotely, respond to misconceptions, and intervene constructively in ways that are not logistically possible when teams meet face-to-face. U-M example: Robin Fowler, Engineering.
Small group discussions can be effective modes of active learning during lectures, whether groups are pre-assigned or based simply on where students happen to sit. Tools like Google Docs and Drawings can enhance small group discussions in several ways. First, they record and archive the artifacts of learning activities (e.g., brainstorming, discussion of readings, or concept mapping), so that students may revisit and study the core aspects of activities or discussions that would otherwise be ephemeral. Second, they allow instructors to monitor and provide feedback on the progress of groups far more efficiently and effectively than is possible when circulating through a classroom and interrupting group conversations. Consequently, instructors can be better prepared to conduct an effective, efficient debrief of group discussions. The products of individual groups can also be projected and discussed as part of the debrief. Some instructors provide feedback to students on group products after class, within the Google Docs or Drawings themselves, especially if the group continues to work on the task after class. U-M examples: Orie Shafer, Molecular, Cellular, and Developmental Biology; Mika LaVaque-Manty, Political Science; and Laurie Hartman, Nursing.
Peer evaluations are an important method of assessment when using groupwork in a class. The Comprehensive Assessment of Team Member Effectiveness (CATME) is a free, web-based peer evaluation tool that allows instructors to monitor team dynamics and intervene to solve problems as needed. Peer evaluation can be useful both to provide formative feedback that improves group dynamics throughout a project and to assess individual students' contributions so that grades can be adjusted accordingly. Peer evaluation should occur repeatedly at key milestones during the group process, not just at the end of a project. For more info, see CRLT Occasional Paper 29.
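CATME's actual rating model is internal to the tool, but the general idea of turning peer ratings into a grade adjustment can be sketched in a few lines. The function below is a hypothetical illustration, not CATME's algorithm; the names, rating scale, and averaging scheme are all assumptions:

```python
def adjustment_factors(ratings):
    """Convert peer ratings into a grade multiplier per team member.

    ratings maps each rater to {ratee: score}. A member's factor is
    their mean received rating divided by the team-wide mean, so 1.0
    means "contributed at the team average". (Hypothetical scheme.)
    """
    received = {}
    for rater, scores in ratings.items():
        for ratee, score in scores.items():
            if ratee != rater:  # ignore any self-ratings
                received.setdefault(ratee, []).append(score)
    means = {m: sum(s) / len(s) for m, s in received.items()}
    team_mean = sum(means.values()) / len(means)
    return {m: round(mean / team_mean, 2) for m, mean in means.items()}
```

An instructor might multiply each member's share of the group grade by such a factor; whether and how much to cap the adjustment is a course-level policy decision.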
The geographical and temporal logistics of meeting outside of class can be a significant barrier to successful teamwork. Google+ Hangouts is a video conferencing feature within Google+ (a social networking application similar to Facebook), through which up to ten people can engage in synchronous online video chats. Google+ Hangouts also allow for text chats and screensharing, providing a platform for student teams to meet remotely and collaborate effectively.
Online Video Conferencing/Chat
LSA Instructional Support Services supports videoconferencing and chat for a variety of purposes. If you are part of LSA, visit the ISS website to learn more about your options. If you prefer a more DIY approach, the following are some good options:
Online Writing Tools
Personal Response Systems
Prior knowledge is necessary for learning but can be problematic if it is not accurate or sufficient. It is a good practice for faculty to assess students' prior knowledge of a subject and identify common misconceptions in order to find an appropriate entry point for introducing a new topic. By using clicker multiple-choice questions, faculty can quickly gauge students' knowledge level. For instance, in a Fall 2006 Chemistry class at U-M, the professor started each lecture with clicker questions asking students to identify new concepts or distinguish between various new concepts discussed in the assigned readings.
Clicker technology makes it easy for faculty to check students' mastery of lecture content. The immediate display of student responses enables faculty and students to see how well students understand the lecture. As a result, faculty can decide whether there is a need for further instruction or supplementary materials. By seeing peers' responses, students can gauge how well they are doing in relation to others in the class and determine which topics they need to review or bring to office hours.
The anonymity of responses facilitated by the clicker technology allows faculty to initiate class discussion and debate on sensitive topics that might otherwise be difficult to explore. For example, questions on controversial issues in a political science course can sometimes be met with absolute silence (Abrahamson, 1999), but the use of clickers can help change classroom dynamics. Faculty can start the class lecture or discussion by posing controversial questions and offering “common-sense” multiple-choice responses. Students’ responses, and their questions about their peers' responses, can provide an opening for class discussion. When students recognize their own opinions and co-direct a class discussion, they may feel a greater sense of ownership over the lecture and discussion. As a result, they will be more engaged in and responsible for their own learning. Also, instead of drawing conclusions from the most vocal students, the faculty member receives a far more accurate overview of opinions from the entire class. Most important, the anonymous feature of the clicker system ensures that viewpoints that might not otherwise be expressed during class discussion are given a voice.
Peer Instruction (Mazur, 1997) and Think-Pair-Share (Lyman, 1981) are cooperative learning strategies that faculty often use to probe students' understanding of lecture content and encourage them to discuss, debate, and defend their answers during lecture. Both strategies entail posing a question to students, giving them time to think and discuss their responses with a partner, and then describing the results to the whole class.
Clicker technology makes the use of these strategies feasible and manageable, even for large classes. For example, the instructor plans several concept questions for each lecture that focus more on the analysis and evaluation of information than on simple recall, rote memorization, or calculation. Students are asked to share and discuss their responses with partners. Some faculty ask students to respond twice to difficult questions, once right after they read the question and then again after they talk to their partners. The faculty member then reviews and explains the varying student responses, helping students clear up their misconceptions.
Research in physics (Crouch & Mazur, 2001) shows that students' cognitive gains from peer instruction are significant: students' scores on tests measuring conceptual understanding improved dramatically; their performance on traditional quantitative problems improved as well.
The relative ease of managing students' responses has made the clicker system a helpful device for testing and grading during lecture. Features such as automatic scoring and record-keeping for each student enable faculty to administer all sorts of tests and quizzes in large lecture halls. For example, in one physics class at U-M, students' responses to questions posed during lecture are scored. Students who answer the questions correctly earn points that count toward a small percentage of the course grade (allocating too many points to a clicker quiz can increase the likelihood of cheating). Moreover, with instant feedback from students, faculty can adjust the pace of a lecture and the amount of content presented, assist students in identifying their knowledge deficiency, help students re-evaluate their study strategies, and determine what additional resources they might need to provide.
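The automatic scoring described above amounts to matching each response against an answer key and accumulating points. A minimal sketch, where the tuple record format and function names are assumptions rather than any actual clicker export format:

```python
def score_session(responses, answer_key, points_per_correct=1):
    """Tally clicker points per student for one lecture session.

    responses: list of (student_id, question_id, choice) tuples.
    answer_key: {question_id: correct_choice}.
    Returns {student_id: points earned}.
    """
    points = {}
    for student, question, choice in responses:
        points.setdefault(student, 0)  # credit attendance even if wrong
        if answer_key.get(question) == choice:
            points[student] += points_per_correct
    return points
```

Keeping the per-question point value small, as the example above suggests, limits the stakes of any single session.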
With clicker technology, faculty can gather anonymous feedback on their own teaching by asking students to respond to questions regarding the lecture, class discussion, homework assignments, group activities, or the overall learning experience in the course. If used early in the term, faculty can make changes to the class that benefit students before the end of the term.
Taking attendance in a large lecture course is usually daunting, if not impossible. But with a system that recognizes each student, it is feasible and convenient for faculty to take student attendance in a large lecture. For example, students’ responses to questions asked at the beginning of the lecture often serve as a record of their attendance. The instructor can easily run reports on student responses and find out who is present or absent from the class.
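Because every submitted response identifies its sender, an attendance report reduces to a set-membership check against the roster. A brief sketch, assuming a hypothetical response-record format:

```python
def attendance(responses, roster):
    """Mark present any enrolled student who submitted at least one response.

    responses: list of (student_id, question_id, choice) tuples.
    roster: iterable of enrolled student ids.
    Returns {student_id: True if present, False if absent}.
    """
    responders = {student for student, _question, _choice in responses}
    return {student: student in responders for student in roster}
```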
Admittedly, faculty hold different views on student class attendance. Some firmly believe that being in class and listening to a lecture is an integral part of learning, making class attendance a must; others think it is not essential for learning and it can be left to the students to decide. Similarly, student opinions about mandatory class attendance vary. Some U-M students surveyed in 2006 and 2007 responded negatively when clickers were used only to check class attendance (Zhu, Bierwert, & Bayer).
There are many other creative ways clickers are being used in classrooms. Draper, Cargill, and Cutts (2002) describe several: students can use them to give anonymous feedback on their peers' class presentations by responding to a brief post-presentation survey, and faculty can create a sense of community and group awareness by clustering people's hobbies, habits, and preferences through student responses to anonymous surveys. Kam & Sommer (2006) note the use of clickers for campaign simulation and polling research, as well as the technology's ability to monitor and facilitate individual and group games. In summary, the only limitation on innovative applications of clickers is the creativity of the instructor.
i>clicker is the classroom clicker system supported by LSA, College of Engineering, Public Policy, Kinesiology, Music, Information, Public Health, and the Library. The LSA Instructional Support Services (ISS) group and CAEN (Engineering) provide training on using the system.
For detailed information about support, click on the links above.
Resource and File Sharing
Through screencasts, instructors may deliver frequent, high-quality feedback on student work via video and audio. This technology has the potential to increase an instructor’s efficiency because it can be faster than handwriting feedback. Posting screencasts to course management systems may also facilitate earlier receipt and use of instructor feedback by students. Video and audio feedback also has the potential to be richer and more detailed than traditional written summaries of feedback or margin notes. For example, an instructor can easily provide multiple examples and concrete suggestions that would be cumbersome to write out. Furthermore, screencasts of feedback on student writing can feel more personal and engaging to students while helping to convey the experience of the reader. By highlighting and annotating specific strengths and areas for improvement on the screen, instructors can also explicitly model their thought processes and expectations for student performance and development.
Classroom Assessment Techniques (CATs) are typically brief, anonymous, and ungraded assessments of student learning in which the unit of analysis is the entire class. The “muddiest point” is a common CAT in which students briefly identify which concepts from a particular class period are unclear and why. Instructors analyze the data during or after class to identify the concepts confusing students. Screencasting provides an easy mechanism for instructors to respond to CATs and provide supplemental instruction without impinging on subsequent class time, especially if a CAT does not reveal consensus among students regarding which concepts are most challenging. Additionally, instructors may take advantage of the internet and digital multimedia resources to enhance explanations in screencasts. Although creating these screencasts represents an initial start-up cost, one quickly forms a library of teaching and learning resources that can be used repeatedly.
Novices and experts approach disciplinary problems in dramatically different ways. Screencasting provides instructors with a mechanism to explicitly model expert thinking and thought processes, whether analyzing a primary historical source or solving a physics problem. For example, instructors can use tablet PCs (or other peripherals that allow for handwritten annotations) to create screencasts of themselves solving quantitative problems and thinking aloud through the rationale for each step. Similarly, an instructor may create a think-aloud screencast of a close reading, demonstrating how they approach and annotate a document or interpret an image.
Copious research suggests that active learning enhances student learning. However, instructors often perceive the implementation of active learning strategies as requiring one to sacrifice content coverage during lectures. Screencasting can allow instructors to shift students’ first exposure to fundamental concepts to before class. Thus, more class time may be used for active learning and teaching critical thinking (e.g., the application and synthesis of fundamental concepts), rather than solely lecturing on basic concepts. This approach may deepen student learning without necessarily sacrificing breadth. Such screencasts can be accompanied by short assignments and/or readings that prepare students for the activities during class sessions.
Instructors can use screencasts to support student learning outside the classroom. For instance, students' experience and facility with instructional technologies can vary widely. Instead of taking class time to orient students to required technologies, instructors can make short screencast tutorials so that students can get up to speed at their own pace. Screencasts can also be an effective means for instructors to frame or preview assigned readings, providing students with background knowledge or contextual perspectives to maximize productive engagement and learning.
Testing and Grading
When an instructor enters student grades in an online gradebook, such as the one available in Canvas or CTools, students can see their grades immediately. Confidentiality is ensured because each student sees only his or her own grades. Students have a chance to see and digest their grades outside of class, reducing the amount of in-class time spent returning papers and discussing grades. Students who see their scores quickly have the opportunity to adjust their strategies and improve their performance. For example, a student who sees that their weekly participation grade is low can prepare to participate more actively in the next class meeting. The Canvas and CTools gradebooks also have an option to calculate a student's overall course grade, so students can track their progress in the class, reducing the chance of a surprise at the end of the term. Both gradebooks are integrated with the Assignments tool and the Quizzes (Canvas)/Test Center (CTools) tool, so grades recorded there can automatically appear in the gradebook.
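The overall-grade option mentioned above is essentially a weighted average over grading categories. A minimal sketch of that calculation, where the category names and weights are hypothetical rather than Canvas's or CTools' internal logic:

```python
def overall_grade(scores, weights):
    """Weighted course grade as a percentage.

    scores: {category: (points_earned, points_possible)}
    weights: {category: fractional weight}; weights should sum to 1.0.
    """
    total = sum(weights[category] * earned / possible
                for category, (earned, possible) in scores.items())
    return round(100 * total, 1)
```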
At the beginning of the term or the start of a new unit, an online test or survey can be used to assess how much students already know on a topic. The level of student interest in the topic can also be gauged using these tools. This information can be used to tailor lessons to areas of greatest need and/or highest interest. If the data collected this way is shared with students in aggregate, it can help individual students see the diversity of backgrounds and interests among their classmates. If the instructor shares with students how the course has been customized based on the pre-test or pre-survey, it communicates the instructor’s commitment to creating a relevant and engaging learning experience.
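Sharing results "in aggregate" means reporting the distribution of responses rather than any individual's answer. For a single pre-survey question, this is a simple tally; the response labels below are hypothetical:

```python
from collections import Counter

def response_distribution(responses):
    """Percentage of respondents choosing each option for one question."""
    counts = Counter(responses)
    total = len(responses)
    return {choice: round(100 * n / total) for choice, n in counts.items()}
```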
Frequent practice and feedback improve learning, but students often struggle to keep up-to-date with course readings or problem sets if they are not required to demonstrate mastery until a major exam or assignment. Frequent, low-stakes online quizzes can provide motivation for students to engage with course material more regularly throughout the term, rather than cram before exams, and get regular feedback on their performance. They can also be a chance to get practice with the types of questions they will face on exams. Reviewing student performance on online quizzes is a great way for instructors to identify areas where students are struggling. Those areas can then be targeted for review in class or for other supplementary resources such as screencasts.