Recently, ClassIn held a webinar to discuss when there can be too much of a good thing: school districts that have adopted too many educational technology tools. In a recent study, EdWeek noted that the average school district has active logins to 1,400 edtech tools, up from 600 before the pandemic. While edtech has proven to help with student engagement and outcomes, there are opportunity costs to managing so much technology.
Lack of Consistency: Disparate tools have distinct workflows, pedagogical approaches, and user interfaces. Managing multiple systems creates cognitive load for students and teachers, stresses student executive functioning, and adds switching time between learning tasks.
Integration and Interoperability: Disparate tools will have different ways of assessing student progress and achievement (raw scores, standards tagging, competency tagging, etc.) which makes it difficult to see a true picture of student/class performance and understanding. In addition, getting all tools to flow to a central gradebook in a meaningful way can be challenging.
Data Privacy & Security Risks: Each tool used in a district has its own approach to data privacy and security, and each approach carries its own risks. In addition, many of these tools have connection points between them where data is passed back and forth, and each connection point is a potential vulnerability.
Limited Focus on Instructional Goals: Juggling numerous edtech tools can divert educators’ attention from instructional goals to administrative ones, leaving teachers acting more as “tech support” than as educators.
Cost Concerns: Many districts opted to use ESSER funds to prepay for several years’ worth of access to a solution (usually in exchange for favorable discounting); as these prepaid periods come to an end, districts will need to determine whether those costs can be absorbed into the normal operating budget.
When faced with the list of tools in use, it can be hard to determine what to keep and what to cull. Here are some questions to help you make those tough decisions:
Is the tool being used? Determine the acceptable usage and activation rates (usually around 40%) and put each solution to the test: does it meet the minimum usage and activation thresholds?
Is the tool having an impact on student outcomes? In a perfect world, you’d have a baseline of student performance data from before the tool was implemented and another measurement after a benchmark period. But we know we don’t live in a perfect world. Look at metrics like engagement, achievement, and qualitative feedback from students.
Does the tool make the job of teachers and tech directors easier or harder? This is where interoperability comes into play: does the tool work easily with your existing SIS and SSO systems? Can student performance data and grades move easily across systems? Do outputs from the tool help teachers develop a better understanding of student performance?
Does this tool provide a redundant capability? The lifecycle of educational tools typically follows a similar trajectory: a need is identified, a solution is developed that solves that one problem, the tool is adopted, then other, larger platforms add that capability. When evaluating your list of tools, check to see if one of your larger platforms has added the capability.
How disruptive will it be to remove access to the tool? Sometimes, when evaluating a solution, you can get “remove” as the answer to all of the above questions and still keep it. Why? Some tools are so ingrained in the day-to-day workflow of students and teachers that removing access would simply be too disruptive.
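For tech directors working through a long inventory, the questions above can be turned into a simple first-pass scoring script. The sketch below is illustrative only: the tool names, usage figures, and the way the 40% threshold and “too disruptive to remove” veto are combined are assumptions for the example, not a prescribed methodology or real district data.

```python
# Illustrative sketch: a first-pass triage of edtech tools against the
# five questions above. All tool data here is hypothetical example data.

tools = [
    {"name": "Tool A", "activation_rate": 0.62, "improves_outcomes": True,
     "interoperates": True, "redundant": False, "deeply_embedded": False},
    {"name": "Tool B", "activation_rate": 0.18, "improves_outcomes": False,
     "interoperates": False, "redundant": True, "deeply_embedded": False},
    {"name": "Tool C", "activation_rate": 0.25, "improves_outcomes": False,
     "interoperates": False, "redundant": True, "deeply_embedded": True},
]

MIN_ACTIVATION = 0.40  # the ~40% threshold suggested above (adjust per district)

def recommendation(tool):
    """Apply the questions in order; 'deeply_embedded' can veto removal."""
    keep_signals = [
        tool["activation_rate"] >= MIN_ACTIVATION,  # Is the tool being used?
        tool["improves_outcomes"],                  # Impact on student outcomes?
        tool["interoperates"],                      # Easier for teachers/tech staff?
        not tool["redundant"],                      # Does it offer a unique capability?
    ]
    if not any(keep_signals) and tool["deeply_embedded"]:
        return "review"  # removal would be too disruptive; revisit later
    return "keep" if any(keep_signals) else "remove"

for t in tools:
    print(t["name"], "->", recommendation(t))
```

A script like this only flags candidates; the final keep/cull call still belongs to the humans who understand each tool’s role in instruction.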
Given the need to consolidate systems, many districts are moving back toward large, multi-capability platforms where many functions live in one seamless environment, with no integrations needed. Districts need to be judicious about the technology tools in use to ensure that the technology is actually driving student achievement.