In other words, the claim was a fantasy, an illusion made up entirely by the PISA team. But PISA keeps repeating its assertion that it measures the skills needed for the future. The strategy worked: through sheer repetition, PISA successfully convinced people.
The report has been challenged by a number of scholars (Kamens, 2015; Klees, 2016; Komatsu & Rappleye, 2017; Stromquist, 2016). One of the most devastating problems with its conclusion of a significant relationship between test scores and economic growth is the logic underlying the analysis used to reach that conclusion. The report compared test scores in a given period (1964-2003) with economic growth during roughly the same period (1960-2000), which is logically flawed because the students who took the tests were not yet in the workforce at the time. It takes years for students to enter the workforce and make up a significant portion of it.
“Test scores of students in any given period should be compared with economic growth in a subsequent period” (Komatsu & Rappleye, 2017, p. 170). Studies that compared test scores with economic growth in subsequent periods, using the same dataset and method, found no “consistently strong nor strongly consistent” relationship between test scores and economic growth and found “that the relationship between changes in test scores in one period and changes in economic growth for subsequent periods were unclear at best, doubtful at worst” (Komatsu & Rappleye, 2017, p. 183), essentially invalidating the claims made in the report.
Even if the claims were valid, they relied primarily on results of international assessments other than PISA. While the report states that it “uses recent economic modeling to relate cognitive skills — as measured by PISA and other international instruments — to economic growth” (Hanushek & Woessmann, 2010, p. 6), results from PISA in fact constituted a very small portion of the data used in the modeling. Only three rounds of PISA had been administered by the time the report was released. Moreover, the economic data covered the period from 1960 to 2000, the year PISA was first implemented. Thus at most one round of PISA data was included, yet the report relied on “data from international tests given over the past 45 years in order to develop a single comparable measure of skills for each country that can be used to index skills of individuals in the labour force” (Hanushek & Woessmann, 2010, p. 14).
Hanushek and others (Hanushek, 2013; Hanushek & Woessmann, 2008, 2012) have repeated similar claims about the economic impact of improving PISA scores. Whether these conclusions are correct is a different matter. The point is that PISA’s claim to measure something different from other international assessments is a lie: it measures the same construct as the others. The claim to measure what matters in the modern economy or the future world better than tests that existed before PISA was invented is but a made-up illusion.
A Monolithic View of Education
Underlying PISA’s claim is the assumption that there is a set of skills and knowledge that is universally valuable in all societies, regardless of their history and future. “A fundamental premise for the PISA project is that it is indeed possible to ‘measure’ the quality of a country’s education by indicators that are common, i.e. universal, independent of school systems, social structure, traditions, culture, natural conditions, ways of living, modes of production etc.” (Sjøberg, 2015, p. 116). But this assumption is problematic.
The first problem is that there is more than one society in the world, and societies differ from each other. For all sorts of reasons — cultural, political, religious, and economic — different societies operate differently and present different challenges. Meeting different challenges requires different knowledge and skills. As a result, “one can hardly assume that the 15-year olds in e.g. USA, Japan, Turkey, Mexico and Norway are preparing for the same challenges and that they need identical life skills and competencies” (Sjøberg, 2015, p. 116).
The second and bigger problem with PISA’s assumption of a universal set of valuable skills and knowledge for all countries is its imposition of a monolithic, primarily Western view of societies. PISA was first and foremost developed to serve member states of the OECD, most of which are among the world’s most advanced economies, with only a few exceptions such as Mexico, Chile, and Turkey. The 35 OECD members in no way represent the full spectrum of diversity across the nearly 200 countries in the world today, yet the assumptions supporting PISA are primarily based on the economic and educational realities of OECD members. Not surprisingly, “the PISA framework and its test are meant for the relatively rich and modernized OECD-countries. When this instrument is used as a ‘benchmark’ standard in the 30+ non-OECD countries that take part in PISA, the mismatch of the PISA test with the needs of the nation and its youth may become even more obvious” (Sjøberg, 2015, p. 116).
A Distorted View of Education
Although PISA claims that it does not assess according to national curricula or school knowledge, its results have been interpreted as a valid measure of the quality of educational systems. But the view of education promoted by PISA is a distorted and extremely narrow one (Berliner, 2011; Sjøberg, 2015; Uljens, 2007). PISA treats economic growth and competitiveness as the sole purpose of education. Thus it assesses only subjects — reading, math, science, financial literacy, and problem solving — that are generally viewed as important for boosting competitiveness in a global economy driven by science and technology. PISA shows little interest in other subjects that have occupied the curricula of many countries, such as the humanities, arts and music, physical education, social sciences, world languages, history, and geography (Sjøberg, 2015).
While preparing children for economic participation is certainly part of the responsibility of educational institutions, it cannot and should not be the only responsibility (Labaree, 1997; Sjøberg, 2015; Zhao, 2014, 2016). The purpose of education in many countries includes much more than preparing economic beings. Citizenship, solidarity, equity, curiosity and engagement, compassion, empathy, cultural values, physical and mental health, and many others are among the purposes frequently mentioned in national education goal statements. But these aspects of the purpose of education “are often forgotten or ignored when discussions about the quality of the school is based on PISA scores and rankings” (Sjøberg, 2015, p. 113).
The distorted and narrow definition of the purpose of education is one of the major reasons for some of the peculiar and seemingly surprising discoveries associated with PISA. There is a persistent pattern of negative correlation between PISA scores and students’ interest and attitudes. Many researchers have found that higher-scoring PISA countries seem to have students with lower interest in and less positive attitudes toward the tested subject (Bybee & McCrae, 2011; Zhao, 2012, 2014, 2016). For example, PISA science scores have a significant negative correlation with future science orientation and with interest in future science jobs (Kjærnsli & Lie, 2011). High PISA scores have also been found to be associated with lower entrepreneurial confidence and capabilities (Campbell, 2013; Zhao, 2012). Moreover, high-scoring PISA education systems seem to have a more authoritarian orientation (Shirley, 2017; Zhao, 2014, 2016). Additionally, PISA scores have been found to correlate negatively with student wellbeing (Shirley, 2017; Zhao, 2014, 2016), a finding that was finally acknowledged openly by PISA in a 2017 report (OECD, 2017). These findings suggest that PISA measures only a very narrow aspect of education and neglects the broader responsibilities of educational systems. Furthermore, pursuing the narrowly defined purpose of education may come at the cost of the broader purpose of education (Zhao, 2017, 2018). “There are very few things you can summarise with a number and yet Pisa claims to be able to capture a country’s entire education system in just three of them. It can’t be possible. It is madness” (Morrison, 2013).
In summary, PISA successfully marketed itself as a measure of educational quality with the claim to measure skills and knowledge that matter in modern economies and in the future world. Upon closer examination, the excellence defined by PISA is but an illusion, a manufactured claim without empirical evidence. Furthermore, PISA implies a monolithic view of education and espouses a distorted, narrow view of its purpose for all education systems in the world. The consequence is a trend of global homogenization of education and the celebration of authoritarian education systems for their high PISA scores, while the negative consequences of such systems on important human attributes and local cultures are ignored.
Berliner, D. C. (2011). The context for interpreting PISA results in the USA: Negativism, chauvinism, misunderstanding, and the potential to distort the educational systems of nations. In M. A. Pereyra, H.-G. Kotthoff, & R. Cowen (Eds.), Pisa Under Examination (pp. 77-96). New York: Springer.
Bybee, R., & McCrae, B. (2011). Scientific literacy and student attitudes: Perspectives from PISA 2006 science. International Journal of Science Education, 33(1), 7-26.
Hanushek, E. A. (2013). Economic growth in developing countries: The role of human capital. Economics of Education Review, 37, 204-212.
Hanushek, E. A., & Woessmann, L. (2008). The role of cognitive skills in economic development. Journal of Economic Literature, 46(3), 607-668.
Hanushek, E. A., & Woessmann, L. (2012). Do better schools lead to more growth? Cognitive skills, economic outcomes, and causation. Journal of Economic Growth, 17(4), 267-321.
Hopmann, S. T. (2008). No child, no school, no state left behind: Schooling in the age of accountability. Journal of Curriculum Studies, 40(4), 417-456.
Kamens, D. H. (2015). A maturing global testing regime meets the world economy: Test scores and economic growth, 1960-2012. Comparative Education Review, 59(3), 420-446.
Kjærnsli, M., & Lie, S. (2011). Students’ preference for science careers: International comparisons based on PISA 2006. International Journal of Science Education, 33(1), 121-144.
Klees, S. J. (2016). Human capital and rates of return: brilliant ideas or ideological dead ends? Comparative Education Review, 60(4), 644-672.
Komatsu, H., & Rappleye, J. (2017). A new global policy regime founded on invalid statistics? Hanushek, Woessmann, PISA, and economic growth. Comparative Education, 53(2), 166-191.
Labaree, D. (1997). Public goods, private goods: The American struggle over educational goals. American Educational Research Journal, 34(1), 39-81.
Shirley, D. (2017). The New Imperatives of Educational Change: Achievement with Integrity. New York: Routledge.
Sjøberg, S. (2015). PISA and global educational governance: A critique of the project, its uses and implications. Eurasia Journal of Mathematics, Science & Technology Education, 11(1), 111-127.
Stromquist, N. P. (2016). Using regression analysis to predict countries’ economic growth: Illusion and fact in education policy. Real-World Economics Review, 76, 65-74.
Uljens, M. (2007). The Hidden Curriculum of PISA: The Promotion of Neo-Liberal Policy By Educational Assessment. In S. T. Hopmann, Gertrude Brinek, & M. Retzl (Eds.), PISA zufolge PISA — PISA According to PISA (pp. 295-303). Berlin: Lit Verlag.
Zhao, Y. (2012). World Class Learners: Educating Creative and Entrepreneurial Students. Thousand Oaks, CA: Corwin.
Zhao, Y. (2014). Who’s Afraid of the Big Bad Dragon: Why China has the Best (and Worst) Education System in the World. San Francisco: Jossey-Bass.
Zhao, Y. (2016). Who’s Afraid of PISA: The Fallacy of International Assessments of System Performance. In A. Harris & M.S. Jones (Eds.), Leading Futures (pp. 7-21). Thousand Oaks, CA: Sage.
Zhao, Y. (2017). What Works Can Hurt: Side Effects in Education. Journal of Educational Change, 18(1), 1-19.
Zhao, Y. (2018). What Works May Hurt: Side Effects in Education. New York: Teachers College Press.