Session 4: Feedback & Assessment
Do you make public the evaluations/feedback you’ve gotten? Why? If yes, where can I read these?
Yes, ELIXIR members use the Training Metrics Database (TMD, https://tmd.elixir-europe.org/), which was created to enhance data collection, storage, and visualization for the ELIXIR Training Platform. It describes audience demographics, assesses the quality of ELIXIR training events immediately post-event, and evaluates the long-term impact on participants. In collaboration with the ELIXIR Training Coordinators, core metrics were developed for measuring demographics and training quality in the short term, and training impact in the long term. The data is primarily collected through feedback surveys, with some demographic information obtained via registration forms. The metrics were derived from the existing practices of ELIXIR training providers and from discussions with stakeholders. Users can view and filter events on the 'All ELIXIR events' page and generate custom reports on the 'Reports' page.
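To make the idea of core metrics concrete, here is a minimal sketch of how post-event survey responses might be aggregated into a demographic breakdown and a short-term quality score. The file and column names (`post_event_survey.csv`, `career_stage`, `overall_rating`) are hypothetical illustrations, not the actual TMD schema.

```python
# Illustrative sketch only: aggregating post-event survey responses into
# simple summary metrics, in the spirit of the TMD's core metrics.
# File and column names are hypothetical, not the actual TMD schema.
import pandas as pd

responses = pd.read_csv("post_event_survey.csv")  # hypothetical survey export

# Demographic breakdown: share of attendees per career stage, in percent
demographics = responses["career_stage"].value_counts(normalize=True) * 100

# Short-term quality metric: share of respondents rating the event 4 or 5 on a 1-5 scale
satisfied = (responses["overall_rating"] >= 4).mean() * 100

print("Attendees by career stage (%):")
print(demographics.round(1))
print(f"Satisfaction (rating >= 4): {satisfied:.1f}%")
```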
On the TMD platform, free text entries are not displayed because the primary focus is on presenting numerical data and statistics rather than suggestions for course improvement.
Pros:
Since individuals might not realize that their input could be publicly accessible to a global audience, excluding free text entries helps protect this information, even if it's anonymous.
Cons:
Not displaying free text means potentially valuable insights are lost. However, if such information were shown, even if not intended for instructors, it could influence potential participants by attracting (or deterring) them based on the feedback.
In general, training course feedback tends to be biased, with respondents expressing either highly positive or negative opinions. On one hand, participants might perceive a course as inadequate, while on the other, they might find it transformative. The perspectives of the average satisfied user are frequently absent in feedback surveys. Moreover, extremely negative feedback often lacks constructive suggestions for improvement, and free text comments that could provide additional context or insights are regularly missing.
What about long-term feedback on training courses?
Long-term feedback on training courses can provide valuable insights into the sustained impact and effectiveness of a training program. There are pros and cons to collecting it:
Pros:
- Long-term feedback helps to evaluate how well the training has equipped participants over time, revealing the persistence of skills and knowledge gained.
- Insights from long-term feedback can inform the refinement of training materials and approaches, ensuring they remain relevant and effective.
- This feedback can capture changes in participants' behavior or practices that result from the training, offering a deeper understanding of its influence.
- Gauging the long-term benefits helps organizations assess the overall value of their investment in training initiatives.
- Demonstrating long-term benefits can enhance the reputation and credibility of the training provider by showing lasting positive outcomes.
Cons:
- Participants are less likely to engage in feedback activities long after the training, leading to very low response rates and potential biases.
- It can be difficult to isolate the effects of training from other factors influencing participants' skills or behavior over time.
- Collecting and analyzing long-term feedback requires more time and resources, which may not always be feasible for all training providers.
- The time required to gather long-term feedback means that actionable insights aren't available immediately, potentially slowing down program improvements.
- Participants' recall of their training experience might diminish over time, affecting the accuracy and reliability of their feedback.
What was the meanest feedback you ever got? How did you react?
(DW) The meanest feedback I ever received was: "I've been working in the data management field for over 20 years, and I found the entire course completely boring. I had to enroll in another course."
I reacted calmly because the course was clearly advertised as a beginner-level RDM training session. The person was simply in the wrong course; a careful reading of the course description would have prevented the mix-up.
I assume you let students evaluate your course or courses: do you think this is best done during the course, or after? Why?
Evaluating courses is essential for improving their content and delivery. Evaluations both during and after the course have their merits:
During the Course:
- Immediate Feedback: Allows for real-time adjustments based on participant feedback, enhancing the learning experience on the spot.
- Engagement: Encourages the course participants to be more engaged and thoughtful about the course content as they know their input can have an immediate impact.
After the Course:
- Comprehensive Overview: Provides the course participants time to reflect on the entire course, resulting in more considered feedback.
- Outcome-Based Feedback: Allows the course participants to assess the course in terms of the skills and knowledge they have gained by the end.
Ideally, a combination of both approaches is most effective: ongoing feedback during the course enables immediate improvements and adjustments, while a comprehensive evaluation afterward informs larger changes for future iterations.
As a form of self-assessment, do you write reflections after teaching? If yes, are they public? If not, could you describe what yours look like?
(DW) Yes, I write reflections, or hold reflection discussions, after a training course as a form of self-assessment. However, these reflections are typically not public and serve as a tool for continuous course improvement. Here's what the reflections usually include:
- What Went Well: A reflection on which aspects of the course were successful, such as effective activities, strong engagement, and positive student interactions.
- Challenges Encountered: Identification of any difficulties or unexpected issues that arose during the session, whether logistical, pedagogical, or related to the course participants' understanding.
- Verbal Student Feedback: Incorporation of any immediate feedback from the course participants that provides insight into their learning experiences.
- Areas for Improvement: Analysis of what could be improved in future sessions, including specific strategies or methods to try.
- Action Plan: A concise plan for implementing changes or trying new approaches in future courses based on the insights gained from the reflection.
By systematically reviewing each teaching experience, I aim to enhance the effectiveness of my teaching methods and better meet the needs of the course participants.
Your course(s) have probably been evaluated by students. Could you come up with a rule for ‘What is a useful question to ask the students?’ for an anonymous survey sent after/during the course?
(DW) For instance, we use this survey:
https://de.surveymonkey.com/r/denbi-course?sc=cau&id=000442
My rules for this survey were:
- Relevance: Ensure the question pertains to specific aspects of the course that you can control or influence, such as content, teaching methods, or resources.
- Clarity: Use clear and straightforward language to avoid any ambiguity, ensuring the question is easily understood by all students.
- Actionability: Frame the question in a way that the responses can lead to actionable insights or improvements. Consider what changes you can implement based on the feedback received.
- Focus on Student Experience: Include questions that reflect on how the course impacted students' understanding, engagement, and interest.
What are the common and user-friendly tools you use for your evaluations?
There are lots of different tools commonly used for evaluations and surveys, each known for its user-friendliness and effectiveness in gathering and analyzing feedback:
- Google Forms: Free and easy to use; great for quick surveys and simple data analysis.
- SurveyMonkey: Offers advanced features and analytics for detailed surveys.
- Microsoft Forms: Integrates seamlessly with Microsoft 365 products; ideal for basic surveys.
- Qualtrics: Provides robust data analysis capabilities; suitable for more complex evaluations.
- Typeform: Renowned for its aesthetically pleasing and interactive survey design.
- Mentimeter: Useful for live feedback sessions with real-time interactive polling.
- JotForm: Offers a wide range of form templates and customization options.
- Kahoot!: Engages users with a game-based approach, often used for quizzes and feedback during courses.
- LimeSurvey: An open-source survey tool allowing extensive customization and flexibility.
- Zoho Survey: Part of Zoho's suite of applications, providing comprehensive survey features and data integration options.
How often should you receive the same feedback before taking it seriously?
The frequency at which you should take feedback seriously can depend on various factors, including the context of the feedback, its significance, and its source. However, here are some general guidelines:
- Pattern of Consistency: If similar feedback is repeated by multiple people across different instances, it indicates a recurring issue or a consistent strength. As a rule of thumb, receiving the same feedback from at least three to five individuals is a strong indicator that it should be taken seriously (see the counting sketch below).
- Feedback from Diverse Sources: When feedback comes from a diverse group, such as students, peers, and supervisors, it gains more credibility. Similar feedback from diverse sources reinforces its validity.
- Critical Areas: Feedback related to critical aspects of a course or service, such as content clarity, engagement, or meeting objectives, should be taken seriously even if mentioned by only a few people, as these areas substantially affect overall quality and effectiveness.
- Trends over Time: Consistent feedback across multiple sessions, semesters, or projects indicates that the issues or strengths are not isolated incidents but persistent patterns that need addressing.
Ultimately, while patterns and frequency provide a useful guideline, it’s important to consider the content of the feedback and its potential impact. Even a single piece of insightful feedback can be worth acting upon if it offers significant value or highlights a critical improvement area.
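As a rough illustration of the "three to five mentions" rule of thumb, the sketch below counts theme tags and flags the recurring ones. The theme labels are assumed to come from manual coding of free-text comments or from a text-analysis tool; the threshold is the hypothetical parameter you would tune.

```python
# Minimal sketch: flag feedback themes that recur often enough to act on.
# Theme tags are assumed to come from manual coding of free-text comments
# or from a text-analysis tool; THRESHOLD encodes the rule of thumb.
from collections import Counter

THRESHOLD = 3  # "three to five individuals" rule of thumb

theme_tags = [
    "pace_too_fast", "more_exercises", "pace_too_fast",
    "room_too_cold", "more_exercises", "pace_too_fast",
]

counts = Counter(theme_tags)
recurring = {theme: n for theme, n in counts.items() if n >= THRESHOLD}
print("Themes to take seriously:", recurring)  # {'pace_too_fast': 3}
```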
Is there an AI program that does the evaluation based on specific feedback?
Yes, there are AI-powered tools and platforms that can assist in evaluating feedback by analyzing text responses to identify common themes, sentiments, and actionable insights. These tools use natural language processing (NLP) to process and analyze large volumes of textual data efficiently. Here are a few examples:
- MonkeyLearn: This platform allows you to build custom text analysis tools that can categorize feedback and perform sentiment analysis to determine the tone of responses.
- Qualtrics: In addition to survey creation, Qualtrics offers AI-driven text analysis features that help identify trends, themes, and sentiments in open-ended feedback.
- IBM Watson Natural Language Understanding: This tool provides advanced text analysis capabilities, including sentiment analysis and keyword extraction, which help in understanding the context and sentiment of feedback.
- Lexalytics: A text analytics platform that provides sentiment analysis and topic categorization, helping to prioritize and understand feedback better.
- Clarabridge: This tool focuses on customer feedback analysis and offers AI-driven insights into sentiment, intent, and emotion in feedback data.
These AI programs can automate the evaluation process of textual feedback, making it easier to identify key issues and opportunities for improvement without manually sifting through large volumes of data. They can help organizations and educators quickly glean actionable insights from qualitative feedback.
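The platforms above are commercial, but the core technique, sentiment classification of free-text comments, can also be tried with open-source libraries. Here is a minimal sketch using the Hugging Face transformers pipeline, an open-source substitute for (not one of) the tools listed; it downloads a pretrained English sentiment model on first use.

```python
# Minimal open-source sketch of sentiment analysis on course feedback,
# using the Hugging Face transformers library (pip install transformers torch).
from transformers import pipeline

# Loads a default pretrained English sentiment model (downloaded on first use)
classifier = pipeline("sentiment-analysis")

comments = [
    "The hands-on exercises were excellent and well paced.",
    "Too much material crammed into one afternoon.",
]

for comment, result in zip(comments, classifier(comments)):
    # Each result is a dict such as {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:8} ({result['score']:.2f})  {comment}")
```

In practice, the predicted labels would be aggregated across all responses to gauge the overall tone before reading individual comments.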
How useful do you think the ELIXIR questions are? How have they helped you improve your course?
(DW) Honestly, I haven't found the ELIXIR questions particularly useful in improving my course. While they do serve the purpose of collecting numbers for KPIs, they don't provide the in-depth, qualitative feedback needed to make substantial course improvements. The questions focus more on statistical outputs rather than offering insights into student experiences or areas for pedagogical enhancement. As a result, they fall short in helping to identify specific aspects of the course that could benefit from revision or innovation. However, they are important to show the success of the ELIXIR training to the world.