The Country's Most Ambitious Digital Learning Project

Educators from coast to coast will celebrate the nation's first Digital Learning Day on Wednesday. Amidst the cool technology demonstrations, shiny gadgets, and debates about online learning, it's essential not to overlook the country's most expensive -- and perhaps most ambitious -- initiative to use digital technology.

Just under 18 months ago, the U.S. Department of Education awarded over $330 million to two state consortia, PARCC and Smarter Balanced, representing 45 states and the District of Columbia, to design and implement new student assessment systems. Two smaller state consortia, Dynamic Learning Maps (DLM) and the National Center and State Collaborative (NCSC), received an additional $67 million to develop new assessments for students with significant cognitive disabilities. The new assessments, offered mostly online, will replace the current state tests given to millions of students each year in reading and math. At the time, Secretary of Education Arne Duncan called these initiatives an "absolute game-changer" and pledged tests of "critical thinking skills and complex student learning that are not just fill-in-the-bubble tests of basic skills." In short, it's an all-out effort to significantly improve one of the weakest -- and most despised -- aspects of our nation's current educational system.

But, while it's easy to think of the consortia as "building tests," the more apt description is that they are attempting to reinvent, with heavy use of technology, the entire process of assessment. They are developing new types of assessment questions that go beyond multiple choice, along with new methods to deliver, administer, score, and report on these assessments. They will delve deeply into professional development. And, together, they are also adopting common performance standards so that proficiency, which now means different things in different states, becomes a consistent standard across states.

Officially, the new assessments, including formative and interim tools, will not launch until the 2014-15 school year. In reality, though, most of the work needs to be fully baked for field-testing in the 2013-14 time frame. That means the real work will take place over the next 18 months. This timeline will increasingly drive both decision-making and expenditures. Even though the consortia have generous grants, doing something quickly, for the first time, and in collaboration across many diverse states costs far more than business as usual.

Many schools and districts, but not all, will struggle to develop the raw capacity -- hardware, software, bandwidth, and tech support -- to deliver online testing. Since budgeting and procurement take time, districts want to know right now what the "requirements" are going to be. Yet there's a chicken-and-egg problem: the consortia don't yet know the content and item types, so they can't say whether schools should prepare for bandwidth-hogging simulations, graphics, etc.

At the same time, we have a limited sense of schools' and districts' actual capacity. When pushed, they may find a way: as one official at a recent State Educational Technology Directors Association (SETDA) event noted, districts and schools in his state felt like they were being pushed off a cliff when online testing was implemented, but in reality, the cliff was only a couple of feet high. While the consortia are developing a "readiness tool" to assess the state of technology down to the school level, they will soon have to make a guess about how ambitious the tech specs will be, and that guess will then become a major constraint on development. And it will have to be made in 2012 about 2015 technology. (iPads were not even around when the Department announced the grant competition.) Lower tech requirements will make schools' and districts' lives easier, but may limit innovation in item types, data collection, etc. Going too far in the other direction worsens the capacity problem.

From an instructional technology and content standpoint, the enormous scope means that the process by which the consortia do their work may have large implications. For example, if the consortia specify that devices must have at least a 13" screen, good luck selling a 10" tablet like the iPad. More importantly, on the back end, decisions about the underlying technology architecture and the standards for data and content transport will also have implications for both the vendor marketplace and the integration of all sorts of other data systems (reporting, analytics, student information systems, formative assessments, content repositories, learning management systems, etc.). In other words, the consortia have the potential to exert a fair amount of market power in a market that is currently dysfunctional. Whether the consortia choose to wield that power, and whether they do so as a force for good, remains to be seen. Ideally, this will all be done with a keen eye towards interoperability, openness, and extensibility -- designing the system to accommodate future growth. But designing with the future in mind may take more time, could cost more, and often entails risk, presenting a dilemma for high-stakes development on a tight timeline.

The consortia provide a real opportunity to both understand and upgrade schools' and districts' technology capacity. As a technology director told me, "they'll buy for the testing mandate." Yet whether this capacity will have dual use for instruction remains to be seen. Schools could get just enough bandwidth to support testing, but have to shut down other uses for multiple weeks throughout the year. They could also decide to acquire "secure" computer labs but isolate them from day-to-day classroom instruction. On the positive side, one of the hopes for the new assessments is that they will point instruction towards more cognitively challenging and beneficial methods. To the extent that these are technology-based, students must have access not just for testing, but also for instruction.

This may all seem too far in the weeds to merit attention. But like it or not, how we measure matters. The next generation of assessments will go a long way towards determining whether digital learning actually fulfills its immense promise. And this may be the best chance to get it right.
