Teaching Evaluations

I believe that the performance of a professor should be open and transparent, giving students (the ultimate customers) information about their teacher. As a data scientist, I love the numbers and the analysis; as an educator, I love the student feedback, because it helps me identify what is working and what isn't so that I can continually improve my teaching. The following tables and charts show my most up-to-date teaching evaluations and performance data from students to allow you, the future student, to make an informed decision regarding your education.

The data encompasses Fall 2000 – Spring 2020, with an average teacher rating of 4.22 and a standard deviation of 0.47 over 133 classes to date. The average uses the scale of Excellent (5.0), Above Average (4.0), Average (3.0), Below Average (2.0) and Failing (1.0). The standard deviation represents the amount of variation around the average. A low standard deviation indicates uniformity among student raters, whereas a high standard deviation indicates disparity or a wide dispersion of ratings, e.g., some loved the class while others did not. Assuming a normal distribution of ratings (i.e., a bell-shaped curve), we would expect about 68% of values to fall within one standard deviation of the average (4.22 ± 0.47), a range of 3.75 to 4.69, and about 95% of values to fall within two standard deviations (4.22 ± 2 × 0.47), a range of 3.28 to 5.16, effectively capped at the scale maximum of 5.0.
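If you want to verify those ranges yourself, the arithmetic takes only a few lines of Python, using the summary statistics quoted above (the 5.0 cap simply reflects the top of the rating scale):

    # Summary statistics quoted above: mean rating and standard deviation.
    mean, std = 4.22, 0.47

    # Under a normal (bell-shaped) distribution, ~68% of ratings fall within
    # one standard deviation of the mean and ~95% within two.
    one_sigma = (mean - std, mean + std)          # (3.75, 4.69)
    two_sigma = (mean - 2 * std, mean + 2 * std)  # (3.28, 5.16)

    print(f"~68% of ratings between {one_sigma[0]:.2f} and {one_sigma[1]:.2f}")
    print(f"~95% of ratings between {two_sigma[0]:.2f} and {min(two_sigma[1], 5.0):.2f}")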

From the data, my teaching evaluations have remained fairly consistent throughout my career and have shown steady improvement since joining UT Tyler for the 2015-2016 academic year (AcadYear=2016). For comparison purposes, here is a link to the UT Tyler faculty evaluations that are required by Texas 81(R) HB 2504.

The table below contains all of the courses I have taught at UT Tyler, the number of times I have taught each course, the average teacher rating on a 5.0 scale, the standard deviation, and the average grade (GPA) of students in the course. Research has shown that teaching evaluations are positively correlated with course grades and class attendance [1]. You will notice that my math-oriented or mentally demanding courses (e.g., Capstone, Data Analytics and Database) have lower teacher ratings, consistent with educational research, while courses that are more conceptual, covering topics already familiar to students, have higher ratings.

Course Name                    Times Taught   Avg Rating   Std Dev   Avg GPA
Business Information Systems        5            4.46        0.27      2.69
Capstone                             7            3.82        0.58      3.40
Data Analytics                       8            3.80        0.45      2.77
Database                             3            3.86        0.46      2.52
Design of MIS                        3            4.49        0.08      3.32
eCommerce                            4            4.53        0.17      2.99
ERP Architecture                     6            4.51        0.07      3.25
Sports Data Analytics                3            4.29        0.31      3.21
Systems Analysis and Design          1            4.84        0.00      2.59
Telecommunications                   1            4.50        0.00      3.00

If we perform a linear regression of the teacher rating for each individual class against the number of each letter grade awarded, we come up with the following equation:

Teacher Rating = 0.263A + 0.216B + 0.163C – 0.058D – 0.427F

I just need to plug in the number of A, B, C, D and F grades from my class and I can estimate the teacher rating with good accuracy. This is fascinating: I can predict the teacher rating from the letter grades alone with an R² of 85.6%, meaning the model fits well, with a p-value of 1.07E-11. Further, the model shows that the letter grades of A, B and C increase the teacher rating, while D and F reduce it. In fact, it takes two letter grades of B to counteract one F. Also, the grades of A and B are statistically significant, meaning those grades are the best predictors of teacher rating.
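For the curious, here is roughly how such a model can be fit in Python. The grade counts and ratings below are made-up placeholders rather than my actual class data, and the fit omits an intercept to match the equation above:

    import numpy as np

    # Each row is one class: counts of A, B, C, D and F grades (placeholder data).
    X = np.array([
        [12,  8, 3, 1, 0],
        [ 9, 11, 5, 2, 1],
        [15,  6, 2, 0, 0],
        [ 7, 10, 6, 3, 2],
        [11,  9, 4, 1, 1],
        [14,  7, 3, 0, 0],
    ])
    # Teacher rating for each class (placeholder data).
    y = np.array([4.4, 4.1, 4.6, 3.7, 4.2, 4.5])

    # Ordinary least squares with no intercept term.
    coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    print(dict(zip("ABCDF", coef.round(3))))

    # Estimate the rating for a new class from its grade distribution.
    new_class = np.array([10, 9, 4, 1, 1])
    print("predicted rating:", round(float(new_class @ coef), 2))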

The other fascinating aspect I find in the data is something I call Spring semester bias. Students in Fall semesters seem to give their professors higher teacher ratings than in the Spring. While I could speculate as to the reasons, my ratings are not alone in this phenomenon (x̄=4.53, σ=0.14, n=22 for Fall semesters and x̄=3.88, σ=0.50, n=20 for Spring semesters at UT Tyler). Also note the higher standard deviation in Spring courses. An observant student might argue that more challenging courses are offered in the Spring. To examine this, I will focus on just Business Information Systems, which I teach identically in Fall and Spring. For this class, x̄=4.42, σ=0.12, n=2 for Fall semesters and x̄=4.02, σ=0.00, n=1 for Spring semesters at UT Tyler. While the number of data points is too low for statistical significance, the difference in ratings between semesters at least anecdotally points towards a Spring bias.
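As a quick sanity check on that Fall/Spring gap, a two-sample (Welch's) t-test can be run directly on the summary statistics above; this is a back-of-the-envelope check rather than a formal analysis:

    from scipy.stats import ttest_ind_from_stats

    # Welch's t-test on the Fall vs. Spring summary statistics quoted above.
    result = ttest_ind_from_stats(
        mean1=4.53, std1=0.14, nobs1=22,  # Fall semesters at UT Tyler
        mean2=3.88, std2=0.50, nobs2=20,  # Spring semesters at UT Tyler
        equal_var=False,                  # note the larger Spring standard deviation
    )
    print(result.statistic, result.pvalue)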

Looking at it from another perspective, the scatter plot below shows teacher rating versus GPA for Fall and Spring semesters focusing on the statistically significant grades of A and B (GPA ≥ 3).

From this chart, a clear relationship emerges that illustrates the Spring bias. The squares represent Fall classes and the circles represent Spring classes. For Fall semester (squares), note the flatness of the teacher ratings (slope=0.2%) versus Spring semester (circles), where an obvious positive slope exists (slope=19.4%). It would appear that students in the Fall are fairly uniform in their assessment of teacher rating, independent of their anticipated grades, whereas students in the Spring appear to attach their anticipated grade to their assessment of the professor's performance.
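The slopes quoted for the chart come from fitting a straight line of teacher rating against GPA within each semester. A minimal sketch, again with placeholder (GPA, rating) pairs rather than the real class data:

    import numpy as np

    # Placeholder (GPA, teacher rating) pairs for each semester.
    fall   = np.array([[3.1, 4.5], [3.3, 4.5], [3.4, 4.6], [3.6, 4.5]])
    spring = np.array([[3.0, 3.6], [3.2, 3.9], [3.4, 4.1], [3.7, 4.4]])

    for name, pts in (("Fall", fall), ("Spring", spring)):
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
        print(f"{name}: slope = {slope:.3f}")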


As you can see from the data, my goal of maintaining high teacher ratings aligns with your goal to succeed. However, I won't make it easy, because I genuinely want you to learn my field and become a successful and productive member of industry. The main takeaway is that if you work hard, immerse yourself in the material, take every extracurricular opportunity offered and try your best, you will have a good experience in my class.