Trends in Basic Reading Skills among School-Age Children
- Of the three age groups assessed, the youngest students experienced the greatest gains from 1971 to 2012. In the latter year, 74% of nine-year-olds—up 15 percentage points from 1971—demonstrated at least partially developed reading skills (Indicator I-01a).
- In 2012, 96% of nine-year-old students demonstrated the ability to perform simple, discrete reading tasks associated with the most basic performance level—the same share as in 2008. This percentage is the highest recorded since testing began in 1971 and is five percentage points above the 1971 share of basic-level performers. After remaining static for approximately three decades, the percentage of students scoring at the highest performance level (demonstrating the ability to interrelate ideas and make generalizations) began to rise after 1999, with the share increasing six percentage points to 22%.
- From 1971 to 2004, scores among 13-year-olds changed relatively little, with the exception of a four-percentage-point uptick in the share of high performers (those demonstrating the ability to understand complicated information) from 1990 to 1992 (Indicator I-01b). But more recent years have seen gains in reading performance for early adolescents. From 2004 to 2012, the percentage of 13-year-old students scoring at each of the performance levels increased, although the increase was statistically significant from 2008 to 2012 only at the intermediate level (the ability to interrelate ideas and make generalizations), where the percentage increased from 63% to 66%.
- In the most recent assessment year, 94% of 13-year-olds displayed at least partially developed reading skills and understanding (the basic performance level for this age). (The percentage demonstrating this achievement level has fluctuated by only three percentage points over the 40 years of testing.) Only 15% demonstrated the ability to understand complicated information.
- For 17-year-olds, the percentage of students demonstrating the ability to interrelate ideas and make generalizations (the basic performance level for this age) rose from 79% in 1971 to a high of 86% in 1988. Subsequently, however, this trend reversed, with scores falling back to the 1971 level by the early 2000s and remaining there through 2012 (Indicator I-01c).
- The trend in midlevel performance among 17-year-olds (demonstrated by an ability to understand complicated literary and informational passages) was similar: an increase followed by reversion to the original level of 39% in 2008 and 2012. The share of students leaving high school with the ability to extend and restructure ideas drawn from specialized or complex texts (competencies associated with the highest performance level) was 6% in 2012, having not changed by a statistically significant amount since the assessment was first administered in 1971.
* See this indicator’s supporting table for information as to which year-to-year changes are statistically significant.
** Data for years 2004 and later are based on a revised assessment. See https://nces.ed.gov/nationsreportcard/ltt/howdevelop.aspx for an overview of the differences between the original and revised assessments.
Source: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, The Nation’s Report Card: Trends in Academic Progress 2012, NCES 2012-456 (Washington, DC: U.S. Government Printing Office, 2012), 15 fig. 6.
The National Assessment of Educational Progress (NAEP) includes two assessments in reading. The first, currently administered every two years and usually referred to as the “main” NAEP reading assessment, changes in response to the current state of curricula and educational practices. The second test generates long-term trend data. Administered every two to five years, this examination has remained essentially unchanged since it was first given to students in 1971; it features shorter reading passages than the main NAEP assessment and gauges students’ ability to locate specific information, make inferences, and identify the main idea of a passage. (For a detailed comparison of the two assessments, see http://nces.ed.gov/nationsreportcard/about/ltt_main_diff.asp.)
The NAEP long-term trend exam (LTT), upon which this indicator is based, is taken by a nationally representative sample of students in each of three age groups: 9-year-olds, 13-year-olds, and 17-year-olds. The NAEP LTT uses a single scale, referred to as a “performance scale,” to assess all students. What constitutes “basic,” “proficient,” and “advanced” performance depends on the age of the examinee. Both a 9-year-old and a 17-year-old may score at Level 250 (able to interrelate ideas and make generalizations). Such a score would constitute an advanced level of performance on the part of the 9-year-old and a basic level of performance on the part of the 17-year-old. The percentages indicated on the graphs displaying LTT data (Indicators I-01a, I-01b, and I-01c) are cumulative totals; they indicate the percentage of students in each age group scoring at or above each performance level. (The LTT performance thresholds are set at 50-point intervals and range from 150 to 350; the three performance levels at which the bulk of students scored are included on each graph.)
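The cumulative “at or above” convention described above can be illustrated with a short sketch. The scores below are hypothetical, not NAEP data; the point is only to show how a single score distribution yields the overlapping percentages displayed on the LTT graphs.

```python
# Illustrative only: hypothetical scores, not NAEP data.
# Demonstrates how cumulative "at or above" percentages are derived
# from a score distribution at the LTT's 50-point thresholds.

scores = [140, 180, 210, 240, 255, 270, 290, 310, 330, 355]  # hypothetical sample
thresholds = [150, 200, 250, 300, 350]  # LTT performance levels

for level in thresholds:
    at_or_above = sum(1 for s in scores if s >= level)
    pct = 100 * at_or_above / len(scores)
    print(f"Level {level}: {pct:.0f}% at or above")
```

Because each percentage counts everyone at or above a threshold, the shares necessarily nest: every student counted at Level 250 is also counted at Levels 200 and 150, which is why the blue, red, and green lines on the graphs never cross.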
Although the performance levels that constitute “basic,” “proficient,” and “advanced” differ for each of the age groups, the color-coding of the levels is consistent across the LTT graphs to facilitate interpretation. Blue represents the percentage of students scoring at or above the most basic performance level for that age group. Red represents the percentage scoring at or above the intermediate performance level. Green represents the percentage scoring at or above the advanced performance level.
In 2004, the LTT was updated in several ways. Content and administration procedures were revised, and, for the first time, accommodations were made for English language learners and students with disabilities that would allow these students to be included in the assessment (they have been included in the main NAEP reading assessment since 1996). Both the original and revised formats were administered in 2004 so the National Center for Education Statistics (NCES) could investigate the effects of the new format on scores. This “bridge” study indicated that differences in average student scores between the two formats were solely attributable to the inclusion of students with disabilities and English language learners in the testing population. On the basis of these findings, NCES concluded, “bearing in mind the differences in the populations of students assessed (accommodated vs. not accommodated), future assessment results could be compared to those from earlier assessments based on the original version.” (U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, n.d. [article revised 30 March 2009], “2004 Bridge Study,” http://nces.ed.gov/nationsreportcard/ltt/bridge_study.asp.)
The NAEP Data Explorer (NDE) permits analysis of both the long-term trend and main NAEP data sets by gender, ethnicity, and other key variables. With NDE one can also obtain results of recent reading assessments for individual states and compare these with student outcomes in other parts of the country. For both an overview of NDE and tips for its effective use, see http://nces.ed.gov/nationsreportcard/pdf/naep_nde_final_web.pdf. NDE itself is located at http://nces.ed.gov/nationsreportcard/naepdata/.
In the fall of 2015, the National Assessment Governing Board, the body that oversees the NAEP, released a revised assessment schedule indicating that the LTT NAEP in both math and reading, last administered in 2012, will not be administered again until 2024. This represents a substantial change in policy, as the LTT had been administered every two to five years since 1971.