Expanding Evidence Approaches for Learning in a Digital World – My Critique

Expanding Evidence Approaches
for Learning in a Digital World

<< http://www.ed.gov/edblogs/technology/files/2013/02/Expanding-Evidence-Approaches.pdf >>

This report was developed under the guidance of Karen Cator and Bernadette Adams of the U.S. Department of Education, Office of Educational Technology. [These two organizations should not exist, so this whole report is bogus. The US Constitution says anything not delegated to the Federal government is left up to the States or the individual. The 10th Amendment to the US Constitution: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” – See more at: << http://constitution.findlaw.com/amendment10/amendment.html#sthash.QRtUaAtY.dpuf >>. There is no mention of education in the US Constitution; therefore, the Federal government should have nothing to do with it. President Obama even agrees with this, yet he is bribing States to follow his lead—Race to the Top.] Barbara Means of SRI International led report development and drafting. She was assisted by Kea Anderson. Susan Thomas of S2 Enterprises LLC supported the development team with substantive editing, writing, and advice. Jeremy Roschelle, Marianne Bakia, Marie Bienkowski, Geneva Haertel, and Terry Vendlinski, also of SRI International, provided advice and insightful feedback on drafts. Eryn Heying provided research assistance and Brenda Waller provided administrative assistance. Patricia Schank and Ashley Lee supported the project website and provided graphics assistance. The report was edited by Mimi Campbell. Kate Borelli produced graphics and layout.

______________________________________________

Since I want to explore the panel’s makeup, I will take a brief look at SRI. From their website:

<< http://www.sri.com/work/timeline-innovation/landing-education.php?timeline=education >>

Timeline for SRI Education

1969 Head Start Program

The federal government launched Head Start, one of the longest-running efforts to address systemic poverty in the U.S., in 1965 to promote school readiness in low-income children. More than 22 million preschool children have participated in the program since its inception. [Yes, it has been an utter failure. The percentage of people in poverty is about the same as it was in the 1960s. Its supporters say that there would be more people in poverty without it. Regardless, the sheer number of people in poverty is greater now than it was then. I do not call that a success. After 50 years there should be no people in poverty, not more.]

From 1969 to 1975, SRI’s assessment of the Head Start program resulted in groundbreaking methodology that has been widely replicated in the U.S. and abroad. [Its assessment should have been to report it as a failure.]

SRI also conducted the national evaluation of Follow-Through, the program that provided follow-up services to children who had participated in Head Start. [The very fact that this was needed points to the failure of Head Start. If Head Start were truly successful, then nothing else should be needed.]

1996
Natural Language Speech Recognition

SRI’s natural language speech recognition software was the first to be deployed by a major corporation when Charles Schwab & Co., Inc. used it for over-the-phone stock quotes in 1996. SRI spun off market leader Nuance Communications to commercialize the technology, which developed applications in travel reservations, product ordering, banking, and more. [Funny, Texas Instruments (TI) had Watson in the 1980s. It was a voice recognition computer built on DSP chips, used mostly in airline reservations. TI had the Speak & Spell before that.]

SRI’s DynaSpeak® and EduSpeak® natural language speech recognition technologies are used in other products and services. The DynaSpeak engine, for example, was used in the IraqComm™ speech translation system. IraqComm was deployed with U.S. forces in Iraq to perform two-way, speech-to-speech machine translation between English and colloquial Iraqi Arabic.

The EduSpeak speech recognition toolkit is specifically designed for developers of language-learning applications (such as for English as a Second Language, or ESL) and other educational and training software.

In 2002, SRI released the first national evaluation of the U.S. Public Charter Schools Program established in 1994 by the U.S. Department of Education. [Again, the US Department of Education and any Federal involvement with Education should be abolished.]

Charter schools are public schools that operate without many of the regulations that apply to “traditional” public schools. Parents can choose to send their children to a charter school; no tuition is charged. Each school has a charter—a contract that outlines the school’s mission, program, and types of students served. In 2011, the number of students attending public charter schools across the U.S. surpassed two million. [Why, when only 17% do better than traditional schools and 35% do worse, with 48% doing about the same? As a group they are a failure. This is an Appeal to Numbers logical fallacy: you say that this is good because over 2 million students are in charter schools. With about 54.5 million grade-school students in the US, that means 52.5 million, or 96%, are not in charter schools.] As the movement has grown, it also has struggled. Stories have emerged about difficulties that charter schools faced in their first months and years. [My statistics are recent.]
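A quick check of that arithmetic, as a minimal Python sketch (the enrollment figures are the rounded ones I cited above):

```python
# Quick check of the charter-school arithmetic above (rounded figures).
charter_students = 2_000_000    # charter enrollment the report cites for 2011
all_students = 54_500_000       # approximate U.S. grade-school enrollment

share = charter_students / all_students
print(f"In charter schools: {share:.1%}")            # ~3.7%
print(f"NOT in charter schools: {1 - share:.1%}")    # ~96.3%
print(f"That is {all_students - charter_students:,} students outside them.")
```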

SRI’s evaluation showed that in general, charter schools had overcome many of the start-up challenges identified in earlier research. Key findings included that on average, more than half of the students in charter schools were members of ethnic minorities, 12 percent received special education services, and 6 percent were English language learners. [Talk about separate but equal? This is what forced integration (busing) of the 1950s-60s tried to abolish.]

The U.S. Department of Education [Again it should be abolished.] chose SRI to lead development of an action plan to transform American education by leveraging technology’s ability to support learning from any location and throughout a person’s lifespan. [American Education, K-12 and for their adulthood, too?]

Transforming American Education: Learning Powered by Technology reflects an increased understanding of how to support learning as well as the advent of broadband networks and Web 2.0 technologies. Rather than treating the use of technology in schools as an end in its own right, the plan explains key ideas on how to support learning, and describes technology’s role in enabling implementation of these ideas.

Key elements of the plan are five goals for implementing a 21st century vision of learning supported by technology; recommended actions for states, districts, the federal government and other stakeholders; and an agenda for research to address grand challenges in education.

Secretary of Education Arne Duncan released the plan in November 2010. In its executive summary, he noted, “The NETP is a five-year action plan that responds to an urgent national priority and a growing understanding of what the United States needs to do to remain competitive in a global economy.” [Again wrong premise. Education of the masses has or had almost nothing to do with the economy.]

I guess I need to say this again in this paper:

We became the largest economy on Earth in about 1880, back when hardly anybody graduated from high school. We became that way because of the abuse of immigrants. The rich got richer off the backs and sweat of cheap immigrant labor. By 1910 we had only about a 10% high school graduation rate. By 1920 it was about 20%, and by 1930 through the mid-1940s it was about 30%. So, we had the highest high school graduation rates in our nation’s history up until that time, and yet we had mass unemployment. FDR’s and/or the Federal Reserve’s policies prolonged the Depression and made it the GREAT Depression. We became arguably the most powerful nation in the history of the world only after WWII. The spoils of war got us out of the Depression. We were about the only country left unscathed. Europe, Japan, and maybe China (which was still mainly an agrarian society) were in shambles. Jobs were plentiful, so we could afford to educate our children more. It was just after WWII that high school became mandatory.

Today about 90% of our population has either a high school diploma or a GED. We have more college graduates than any other country, and yet we have millions unemployed or underemployed. Some 54.5% of all college graduates are NOT in jobs that require a college degree. We have them doing jobs that a high school dropout could do.

We have the best educated citizenry in the history of the World and yet mass unemployment.

So, tell me that education of the masses is needed to have a booming economy.

Recent PISA test results showing that a country in poverty did better than we did lead to two conclusions. 1) Poverty does not cause an inability to learn. 2) The ability to teach the young does not guarantee a booming economy.

So, again, tell me that education of the masses has much to do with the economy.

To date since its release, 16 states [Only 16? Why so few? If this is so right, then why not 40-50 states? This is not even 1/3 of the states. If this were a Constitutional Amendment that needed ratification by the states, then it would have failed miserably. Ratification takes 38 states, so you would not have even half of what would be needed.] have used the National Education Technology Plan in developing their own state technology plans, as a resource for state educators, or in developing state activities in support of learning technology. The Plan was cited in Hawaii’s Senate Bill 2482 to create a trust fund to support teaching science and technology in the state’s public schools. [Teaching STEM is NOT necessary, especially in K-12.]

The plan sets goals to use technology to improve student learning, accelerate and scale up adoption of effective practices, and use analytics to customize learning to each student and for continuous improvement.

It calls for every learner and teacher to have a computing device connected to always-on Wi-Fi. It describes how teachers can leverage this infrastructure to better connect their students to each other, to community resources for learning, and to distant experts and data sources. [Again, why? Aren’t their textbooks enough? They have always been enough before, why not now?] Examples from forward-looking schools and classrooms illustrate students’ use of Web 2.0 technologies to communicate, collaborate, and create their own content. [What is so great about any of this?]

I would say that most of SRI’s so-called accomplishments directed at Education have missed the mark. So, why would anyone listen to them?

______________________________________________________________________________

Back to the original piece—

Relatively low-cost digital technology is ubiquitous in daily life and work. [Not as low cost as most people think. Yes, it is ubiquitous; therefore it is NOT needed in schools.] The Web is a vast source of information, communication, and connection opportunities available to anyone with Internet access. [Most of it is personal opinion and sales/marketing ploys. It is extremely fad-laden.] Most professionals and many students have a mobile device in their pocket with more computing power than early supercomputers. [So? Students should not carry these around, at least not in school.] These technological advances hold great potential for improving educational outcomes, but by themselves hardware and networks will not improve learning. Decades of research show that high-quality learning resources and sound implementations are needed as well. [Decades of research? Whom do you cite? I want to see it. Most research was done only recently, after the push for technology had already started. In other words, it is baseless. The recent research is short-term, uses small sample sizes, and is inconclusive at best. There is no research worth its salt out there. The research is done mainly by companies as a marketing ploy to sell their services or products.]

The learning sciences have found that today’s technologies offer powerful capabilities for creating high-quality learning resources, such as capabilities for visualization, simulation, games, interactivity, intelligent tutoring, collaboration, assessment, and feedback. [Visualization? Not sure that this is absolutely necessary. It is similar to simulation. Both cause the kids NOT to use their own imaginations, but someone else’s. Games are not needed either. Interactivity? Since when was this necessary? Intelligent tutoring? Human tutoring is better. Collaboration is NOT necessary. Assessment can be done best by the teacher. Feedback – again, the teacher is best. What the proponents have done is list some things that computers can do and then say we now need these things.] Further, digital learning resources enable rapid cycles of iterative improvement, and improvements to resources can be instantly distributed over the Internet. [Most of what is taught in school (K-12) is, and should be, those things that are pretty set in stone, things that do not change. Why are you teaching fads? Ah yes, because the Internet is fad-laden!!!] In addition, digital technologies are attracting exciting new talent, both from other industries and from the teacher workforce itself, into the production of digital learning resources. [So what?] Yet even with so many reasons to expect dramatic progress, something more—better use of evidence—is needed to support the creation, implementation, and continuous enhancement of high-quality learning resources in ways that improve student outcomes. [Why continuous enhancement? Why enhancement at all? These things do not necessarily improve student outcomes.]

In a digital world, evidence fuels innovation and makes improvement possible. Evidence is what separates real advances from mere novelties, enhanced learning from increased entertainment. In the recent past, evidence has been relatively scarce in education. And the quality of the best available evidence has often been disappointingly weak. [Exactly my point. NO PROOF!!!] How can education decision-makers obtain the increased quality and quantity of evidence needed to fuel innovation and optimize the effectiveness of new learning resources? [NEW LEARNING RESOURCES ARE NOT NEEDED.]

This report argues for two critical steps.

First, education must capitalize on the trend within technology toward big data. [Yes, systems do tend to grow over time, but growth in and of itself is not good. Bigger is not better.] New technologies can capture, organize, and analyze vast quantities of data. In the recent past, data on learning had to be laboriously and slowly collected, and consequently, data were scarce. Now, new technology platforms collect data constantly and organize data automatically. As learning resources are deployed on these platforms, learners will generate vast quantities of data whenever they interact with learning resources. These data can be available to inform both educational resource development and instructional decision making. [It will be used to micromanage education, which is wrong. Also, it tries, unsuccessfully, to turn the art of education into the science of education. Education is an ART.]

Further, new types of data are becoming available. Student data have long focused on broad, relatively static categories—such as student demographic characteristics, grade level, and end-of-year grades and test scores. Now, student data are far more dynamic, [It does not have to be real-time/dynamic.] as learning systems capture extremely fine-grained information on such things as how students interact with learning resources and with others using the same resource. [Do you not see that none of this is necessary if computers are NOT used?] Whereas older data mostly measured outcomes of learning, now data can be more closely tied to the process of learning. [Still not close enough to make it viable—reliable data.] Whereas in the past data were typically collected in a single context, such as classrooms or districts, now data collected in different parts and at different levels of the educational system can be more easily linked. [So what? Again, not necessary.] Whereas in the past data collected by different people through different methodologies tended to be reported in isolation, with different types of reports on the same product available in many different places, now websites can easily aggregate ratings and evidence from multiple sources in a single reference site. [All such data are bogus. You cannot say that any one teacher or other resource caused the effects you found. Too much goes into learning that is outside the school’s and the researchers’ control or knowledge.]
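To make concrete what this “fine-grained” interaction data looks like, here is a minimal sketch of one such event record. The field names are my own illustration, not anything defined in the report:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical sketch of one fine-grained interaction event of the kind
# the report describes; the field names are illustrative, not a standard.
@dataclass
class LearningEvent:
    student_id: str            # e.g., the statewide student identifier
    resource_id: str           # which digital learning resource was used
    action: str                # e.g., "viewed_hint" or "submitted_answer"
    correct: Optional[bool]    # outcome, if the action was an answer attempt
    timestamp: datetime        # when the interaction happened

event = LearningEvent(
    student_id="S-0001",
    resource_id="fractions-unit-3",
    action="submitted_answer",
    correct=False,
    timestamp=datetime(2013, 2, 1, 10, 15),
)
print(event)
```

Every click becomes a row like this; multiply that by millions of students and you have the report’s “big data”, and the privacy exposure I keep objecting to.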

New technologies thus bring the potential of transforming education from a data-poor to a data-rich enterprise. [Yes, bogus data.] Yet while an abundance of data is an advantage, it is not a solution. Data do not interpret themselves and are often confusing—but data can provide evidence for making sound decisions when thoughtfully analyzed. Sound decisions must be made at each step of a continuous improvement process to successfully guide refinements. Without thoughtful analysis of data, iteration is a random walk. [Yes, but thoughtful analysis of bogus data yields bogus results.]

The second step is a revitalized framework for analyzing and using evidence that can go hand-in-hand with newly abundant sources of data. [Again, garbage in garbage out. Abundant data does not mean better or good.] In the recent past, policymakers and funders have pressed for gold standard evidence. Gold standard evidence is best produced by conducting a randomized controlled trial in which learners are assigned to contrasting conditions randomly. [Problem is, you cannot control the environment.] Gold standard evidence can establish when an educational intervention caused an improved educational outcome. While gold standard evidence is valuable, the pathway to achieving it has been slow and expensive. In particular, the cost and time needed are often poor matches to the rapid pace of digital development practices. [We rely way too much on technology, in general, and for education in particular. Proponents of education reform want to increase the use of technology. WHY?]

Other approaches to gathering and using evidence can be appropriate, depending on the goal and the circumstances. Developers and educators make myriad decisions every day. The perfect can be the enemy of the good when one puts off fixing an urgent or simple or small-scale problem until gold standard evidence is in hand. An evidence framework should help educational stakeholders align their methods of obtaining evidence—which can include randomized controlled trials—with their goals, the risks involved, the level of confidence needed, and the resources available. [Flat out IMPOSSIBLE.]

Purpose of This Report

This report combines the views of education researchers, technology developers, educators, and researchers in emerging fields such as educational data mining and technology-supported evidence-centered [Evidence is flawed since you cannot control or monitor everything within the environment, nor assign what you can control a proper weighted average.] design to present an expanded view of approaches to evidence. It presents the case for why the transition to digital learning warrants a re-examination of how we think about educational evidence. The report describes approaches to evidence-gathering that capitalize on digital learning data and draws implications for policy, education practice, and R&D funding.

Contents of This Report

This report describes how big data and an evidence framework can align across five contexts of educational improvement. It explains that before working with big [BOGUS!] data, there is an important prerequisite: the proposed innovation should align with deeper learning objectives and should incorporate sound learning sciences principles. [No to both. Innovation is not needed, and in most cases it is bad when it comes to education.] New curriculum standards, such as the Common Core State Standards and the Next Generation Science Standards, emphasize deeper learning objectives. [These two should be abolished as well.] Unless these are substantively addressed at the core of a learning resource, it is unlikely the resource will meet these important objectives. Likewise, a proposed innovation is more likely to succeed if it is grounded in fundamental principles of how people learn. Once these prerequisites are met, the evidence framework describes five opportunities for utilizing big data, each in a different educational context:

1. During development of an innovative learning resource, educational data mining and learning analytics can uncover patterns of learner behavior that can be used to guide improvement. Further, A/B testing can compare alternative versions of a Web-based product with thousands of users in a short time period, leading to insights as to whether alternative A or alternative B is more promising. [Again, not necessary if this approach is scrapped.] A key challenge for these uses of evidence is to identify the relationship between simple user behaviors and complex learning objectives. A further challenge is that those interpreting user data often do so with little access to the learner’s context. A complement to data mining and A/B testing evidence is design-based implementation research, which collects extensive data from learners and teachers in a realistic setting. The purpose of design-based implementation research is to engage designers with implementation contexts, because improving learning depends on achieving good implementations of new resources in realistic contexts. Design-based implementation research brings contextual insights, which can guide interpretation of data mining and A/B testing results and support the development and continuous improvement of learning resources. [Again, data will be bogus.]
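To pin down what the report means by A/B testing, here is a minimal sketch: two versions of a resource, hypothetical mastery counts, and a two-proportion z-test to judge whether the difference is real:

```python
from math import sqrt
from statistics import NormalDist

# Minimal A/B comparison of two versions of a learning resource.
# The counts below are hypothetical, purely for illustration.
passed_a, n_a = 430, 1000   # students who mastered the unit with version A
passed_b, n_b = 465, 1000   # students who mastered the unit with version B

p_a, p_b = passed_a / n_a, passed_b / n_b
p_pool = (passed_a + passed_b) / (n_a + n_b)             # pooled success rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided test

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

Even in this hypothetical, a 3.5-point difference across 2,000 students gives p ≈ 0.12, which is not statistically significant; that rather illustrates how weak this kind of “evidence” can be.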

2. As learners use a digital resource, adaptive learning systems can personalize learning by using big data with new evidence models. [NO NEED AGAIN!] Conventionally, learning resources are published in books and are the same for all learners. [What is wrong with that? Since we should teach kids that which is true and generally unchanging, should we not teach the same thing, as found in a textbook?] With digital resources, however, each learner can have a different pace, style of presentation, or type of content. [This is not necessary.] Big data can be used to collect extensive information about individuals or groups of learners, and the data can be used to adapt a learning resource to the learner. For example, in an intelligent tutor system, real-time data can identify the exact step in a complex problem where a student goes wrong and provide feedback specific to that step (rather than providing feedback on the whole problem or to a whole group). Data can also be collected that reveal relationships between options in the learning process as well as increases in learning outcomes, and students can be presented with options that have been shown to work better for them. Adaptations can also be based on motivational or affective factors. Further, teachers can be the agents of adaptation, making instructional decisions based on rich data collected from their students. The major challenge in these uses of evidence has been the difficulty of finding robust interactions between characteristics of users and alternative ways that learning resources can be adapted to produce learning gains. Although many find it obvious that learning can be personalized, it actually takes quite a bit of work to pin down solid evidence of combinations of user characteristics and specific adaptations that matter. Rather than blanket statements about the value of personalization, evidence that specific learning system adaptations produce better learning for specific types of users is needed, and these findings need to be positive, stable, and reproducible. [Granted, this would be preferable, but I think it is impossible. You cannot determine which one thing caused which effect when it comes to learning. Kids are always learning, whether in school or not. Again, there are just too many variables to account for. Blanket statements have never been proven either, when it comes to education.] The rapid A/B testing possible with digital learning systems means that we now have the ability to investigate relationships among user characteristics, system adaptations, and learner outcomes much more efficiently than before.
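The report never says how an “intelligent tutor” actually estimates whether a student has learned a skill. The standard technique in that literature is Bayesian Knowledge Tracing, which the report does not name; here is a minimal sketch with hypothetical parameter values:

```python
# Bayesian Knowledge Tracing: a standard evidence model used by many
# intelligent tutoring systems. Parameter values here are hypothetical.
P_INIT = 0.2     # prior probability the student already knows the skill
P_TRANSIT = 0.1  # probability of learning the skill on each practice step
P_SLIP = 0.1     # probability of answering wrong despite knowing the skill
P_GUESS = 0.25   # probability of answering right without knowing the skill

def bkt_update(p_know: float, correct: bool) -> float:
    """Update the mastery estimate after one observed answer."""
    if correct:
        posterior = (p_know * (1 - P_SLIP)) / (
            p_know * (1 - P_SLIP) + (1 - p_know) * P_GUESS)
    else:
        posterior = (p_know * P_SLIP) / (
            p_know * P_SLIP + (1 - p_know) * (1 - P_GUESS))
    # Account for learning that may happen on this practice step.
    return posterior + (1 - posterior) * P_TRANSIT

p = P_INIT
for answer in [False, True, True, True]:   # hypothetical answer sequence
    p = bkt_update(p, answer)
    print(f"answer correct={answer}: P(knows skill) = {p:.2f}")
```

Four hypothetical parameters per skill, every one of which must be estimated from the very data I have been calling bogus.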

3. As institutions try to support struggling students, big data and new data analysis techniques can help guide intervention. [Why intervention? Most students do not struggle. Granted, the higher they go, the harder it gets. The whole idea of technology in education was to make it easier for them.] Most states now have statewide data systems with a standard student identifier for each student, which can make it easier to track data about students as they transition among education settings. Some school districts now are also experimenting with linking administrative data in student information systems to records and events in learning management and digital learning systems. Those data, in turn, can be combined with data from social services agencies that students may engage with outside school, such as the juvenile justice system, the foster care system, or youth development programs. Linking these various types of data can lead school systems to ask new kinds of questions and to better understand relationships between students’ conditions outside school and their in-school behaviors and experiences. Increasingly sophisticated techniques for predictive analytics, which combine a variety of disciplines including statistics, data mining, and game theory, are being used to investigate whether some student behaviors are predictors of school failure and dropping out. The key evidence challenge is establishing the external validity of the “signal” provided by technology. Most early warning systems are based on correlational data patterns. The interpretation of those patterns can lead to the design of interventions, but those interventions may or may not be more effective than a placebo. Classical randomized controlled trials can test the effectiveness of an intervention in particular venues. Alternatively, sophisticated modeling techniques and longitudinal analyses can help rule out alternative explanations for positive trends in student outcomes following an intervention. [Stop interfering with the child. Stop interventions, unless the parents ask for help, as in tutoring.]
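For the record, most of the “early warning systems” alluded to here boil down to threshold rules on attendance, behavior, and course performance. A minimal sketch, with cutoffs that are purely illustrative, not validated:

```python
from typing import List

# Hypothetical threshold-rule early-warning flagger of the kind the
# report alludes to; the cutoffs below are illustrative, not validated.
def warning_flags(attendance_rate: float, behavior_incidents: int,
                  course_failures: int) -> List[str]:
    flags = []
    if attendance_rate < 0.90:       # missed more than 10% of school days
        flags.append("attendance")
    if behavior_incidents >= 2:      # repeated disciplinary referrals
        flags.append("behavior")
    if course_failures >= 1:         # failed a core course
        flags.append("course performance")
    return flags

# Example: a student with 85% attendance, 1 incident, 2 failed courses.
print(warning_flags(0.85, 1, 2))    # ['attendance', 'course performance']
```

Nothing in such a rule establishes causation, which is exactly the “correlational data patterns” problem the report itself admits.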

4. As educational systems assess student achievement, big data and new evidence models can shift measurements to focus more on what is really important and to provide more timely information to educators and students. As demands shift in the 21st century, new outcomes such as collaboration, problem solving, and critical thinking become even more important than in the past. [No, they are not. These should wait until college, not K-12. Most jobs require only a high school diploma, but I would submit that most of these jobs can be done by high school dropouts. High school students are already doing some of these things.] Yet these competencies are rarely measured by high-stakes tests. [Abolish high-stakes tests. They really prove nothing.] Further, the current generation of high-stakes tests are mostly given at year’s end. As assessments are delivered via technology, they can accumulate data on a student’s accomplishments throughout the year and can offer feedback more formatively. The evidence challenge, however, is that even with technology, it is hard to design assessments to measure what is truly important with reliability and validity. [Exactly what I have been saying, only I would add that you cannot truly measure anything important from any test.] Evidence-Centered Design (ECD) is an emerging [Emerging? So, you still do not know much about it?] approach to addressing these challenges. In the past, ECD had been labor intensive, but technology support systems for applying it to assessment development have recently emerged. In addition, combining ECD with assessments embedded in digital learning systems opens up possibilities for assessing noncognitive features, such as persistence and leadership, [Why would you measure these things? This is what I mean about testing/measuring too much.] that are recognized as important but that could not be measured reliably and efficiently in the past. A continuing challenge for both technology-embedded and traditional assessments is determining whether the measured outcomes transfer outside the tested context.

5. As educators choose and adapt learning resources from the vast array now offered on the Internet, big data and new evidence models can inform their choices. Ideally, many educators would like to make all their choices based on evidence of effectiveness established through randomized controlled trials. However, the production of rigorous effectiveness studies cannot keep pace with the abundance of digital learning resources, and thus educators often make decisions in the absence of evidence of effectiveness. Further, even when effectiveness data are available, educators have additional selection criteria, such as ease of implementation and likely appeal to their particular students.

Methods used in e-commerce are now emerging in education: [This is wrong. Education is NOT a business. Businesses fail far more often than they succeed. Why would you want to model anything after business?]

• user reviews and ratings of digital learning resources in online education repositories;

[NO!]

• user panels, which are sizable managed online communities that are used to provide prompt feedback to test a product’s usability, utility, pricing, market fit, and other factors;

[Prompt feedback is not necessary.]

• expert ratings and reviews to provide curated sets of learning resources and recommendations on how to use them; and

[Are there any true experts in this? No field could be more in its infancy. The supporters have said for a number of years now, and in this report as well, that the research has been inconclusive at best.]

• aggregations of user actions on learning resources, such as clicking, viewing, downloading, and sharing to social media.

This whole thing is like the blind leading the blind or the dumber leading the dumb.

Although reviews and recommendations are not proof of effectiveness, aggregating many user opinions has proven useful in other areas of the economy in helping users anticipate what their experience with a new product might be. In addition, schools can participate in test beds of schools [It is this experimentation that is the problem.] or classrooms that have committed to working with a community of researchers to put the necessary infrastructure (for example, data sharing agreements and classroom technology) in place to test new learning technologies. The “alignment” of learning resources to educational standards is a key issue for which evidence is needed but often lacking. Often products advertise alignments that are superficial and fail to address the details of new standards. Efforts are under way to apply technology to this issue, too, with technology supports for making alignment judgments and a Learning Registry that aggregates alignment judgments from multiple sources. Currently, many different organizations are providing access to different types of evidence related to the quality of digital learning resources. This fragmentation suggests the need for an objective third-party organization that can serve as a trusted source of evidence about the use, implementation, and effectiveness of digital learning resources. [I have already said it: the bias in the so-called research comes from the companies themselves.]
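As for “aggregating many user opinions”, the usual e-commerce method is a Bayesian (weighted) average that shrinks items with few ratings toward the overall mean. A minimal sketch, with hypothetical numbers:

```python
# Bayesian (weighted) average rating, the common e-commerce aggregation.
# The prior weight and all ratings below are hypothetical illustrations.
def weighted_rating(item_mean: float, n_ratings: int,
                    global_mean: float, prior_weight: int = 25) -> float:
    """Shrink an item's average rating toward the global mean when the
    item has few ratings, so one glowing review cannot dominate."""
    return ((n_ratings * item_mean + prior_weight * global_mean)
            / (n_ratings + prior_weight))

global_mean = 3.4   # average rating across all resources in the repository
print(weighted_rating(5.0, 2, global_mean))    # 2 ratings: pulled to ~3.5
print(weighted_rating(4.6, 400, global_mean))  # 400 ratings: stays ~4.5
```

Which only underlines my point: this “evidence” is opinion, statistically smoothed.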

Summary

Overall, this report recommends an approach to evidence that is continuous and nonlinear—incorporating new information constantly as it becomes available and using that information for improvement. In the new world of digital resources, older approaches to evidence that are highly linear or focus exclusively on gold standard methods may not be as useful as reflective approaches that integrate multiple forms of evidence. [Again, multiple bogus forms of evidence.] This report offers an evidence strategy framework that acknowledges that decisions require different levels of confidence [I do not. I require a near-absolute level of confidence, which will never be attained.] and entail different levels of risk. When an educator has high confidence in the fundamentals of a product and expects that a resource can be safely used, a rapid iterative approach to improved implementation may be appropriate. Conversely, if confidence is low or risk is perceived as high, different approaches to gathering and evaluating data make more sense. The ideas presented in this report have implications for learning technology developers, consumers, education researchers, policymakers, and research funders. The Technical Working Group of researchers and policymakers who provided input and guidance for this evidence framework also developed a set of recommendations for putting the framework into action. The resulting 14 recommendations for capitalizing on new approaches to evidence as digital resources take center stage in education appear on the next page. The report also includes cautionary notes about the ethical issues that must be tackled in handling student data. [Yes, about rights to privacy, which cannot be assured.]

Recommendations

The following recommendations are designed to help education stakeholders turn the ideas presented in this report into action. Detailed explanations of each recommendation are in the Summary and Recommendations section of this report.

1. Developers of digital learning resources, education researchers, and educators should collaborate to define problems of practice that can be addressed through digital learning and the associated kinds of evidence that can be collected to measure and inform progress in addressing these problems. [Yes, you need to justify your jobs. This is like closing the barn door after the horse has bolted. You are trying to find reasons to use technology after you have already started using it.]

2. Learning technology developers should use established basic research principles and learning sciences theory as the foundation for designing and improving digital learning resources. [Perhaps, but no proof exists to justify digital learning.]

3. Education research funders should promote education research designs that investigate whether and how digital learning resources teach aspects of deeper learning such as complex problem solving and promote the transfer of learning from one context to many contexts. [Deeper learning is not needed in K-12; in college yes.]

4. Education researchers and developers should identify the attributes of digital learning systems and resources that make a difference in terms of learning outcomes. [Yes, they should, but they cannot.]

5. Users of digital learning resources should work with education researchers to implement these resources using continuous improvement processes. [Continuous improvement did not work in business in the 1980s, so why do you think it will work for education now? We copied it from the Japanese, and both of our economies tanked. Again, education is not a business.]

6. Purchasers of digital learning resources and those who mandate their use should seek out and use evidence of the claims made about each resource’s capabilities, implementation, and effectiveness. [The evidence is strictly a sales pitch, a sales and marketing ploy, and not proof of anything.]

7. Interdisciplinary teams of experts in educational data mining, [Only the kids and their parents need to know how the person is doing. Data mining’s aim is to make that data available to almost anyone. It is called an invasion of privacy.] learning analytics, and visual analytics should collaborate to design and implement research and evidence projects. [Again, projects are not necessary. It is just another thing the computer can do, but not necessary for the individual to do. Collaboration can be done without a computer, but it is more important that each individual does his or her own work.] Higher education institutions should create new interdisciplinary graduate programs to develop data scientists who embody these same areas of expertise. [God, I hope not. You are just throwing ethics out the window. Computer science/mathematics already does, but they do not necessarily push this onto people. Remember, most of what is taught in a 4-year college is NOT in the student’s major.]

8. Funders should support creating test beds for digital learning research and development that foster rigorous, transparent, and replicable testing of new learning resources in low-risk environments. [Unfortunately, this type of education is high-risk by its very nature, given its reasons for being in the first place. Relying on tests to judge effectiveness is very suspect. It is also causing teachers to quit the field, and it is driving teachers and administrators to forge answers on these tests, that is, to cheat.]

9. The federal government should encourage innovative approaches to the design, development, evaluation, and implementation of digital learning systems and other resources. [Sorry, the Federal government should stay out of education altogether; the US Constitution, don’t you know.]

10. Stakeholders who collect and maintain student data should participate in the implementation of technical processes and legal trust agreements that permit the sharing of data electronically and securely between institutions, complying with FERPA and other applicable data regulations and using common data standards and policies developed in coordination with the U.S. Department of Education. [This is impossible. Too many sites are being hacked daily. Data security is an illusion, as is security itself.]

11. Institutional Review Board (IRB) documentation and approval processes for research involving digital learning systems and resources that carry minimal risk should be streamlined to accelerate their development without compromising needed rights and privacy protections. [Impossible. Rights will be trampled.]

12. R&D funding should be increased for studying the noncognitive aspects of 21st-century skills, namely, interpersonal skills (such as communication, collaboration, and leadership) [These are not necessary to teach. Communication is already ‘taught’ in English classes. Collaboration is something that happens in your job, naturally. Teaching leadership in K-12 is silly.] and intrapersonal skills (such as persistence and self-regulation). [Self-regulation of children? Persistence should happen anyway, if for no other reason than that the kids spend 13 years in school.]

13. R&D funding should promote the development and sharing of open educational resources that include assessment items that address learning transfer. [Plain and simple NO.]

14. The federal government and other interested agencies should fund an objective third-party organization as a source of evidence about the usability, effectiveness, and implementation of digital learning systems and resources. [Again the federal government should stay out of this. There should be no 3rd party organizations either. Again, education should be for the parents to control.]

I hope that I have condemned this report from beginning to end.
